Autofac on Azure Functions
Azure still keeps evolving at a rapid pace. The next new and hot thing is Azure Functions.
What makes Azure Functions attractive?
In the past six months I had looked at Azure Functions every now and then, but I was on the one hand pretty busy with a move to a different company, and on the other hand the static nature of every single Azure Function raised some question marks above my head. With the beta of Durable Functions, Azure Functions got my attention again: Durable Functions turn some really time-consuming implementation topics into very simple things.
My new company evaluated Azure Service Fabric, which offers a lot of possibilities but has one drawback: integration with the surrounding Azure infrastructure must be done on your own. In my experience this is certainly doable, but it costs a significant amount of time to master the infrastructural topics.
This is where Azure Functions simplify things greatly. Let's have a short look at how Azure Functions handle infrastructural topics before we dive into the topic given by the title of this article.
Infrastructure handling is greatly simplified by Azure Functions
Let's say you want to send messages to Service Bus. To add some meat to them, the content of each and every message will be a random superhero name. These messages shall then be retrieved by another function, which persists the content in an Azure Cosmos DB instance.
Azure Functions shall be short by design, and Microsoft does some pretty smart things to support that. The function itself can focus on the actual task: creating the content of the message. For sending a message to a Service Bus instance, only configuration and declarative work needs to be done.
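A minimal sketch of such a sending function could look like this (the connection setting name, the timer schedule, and the list of hero names are my assumptions; "SampleQueue" is the queue used throughout this sample):

```csharp
using System;
using Microsoft.Azure.WebJobs;

public static class SendSuperhero
{
    private static readonly string[] Names = { "Batman", "Wonder Woman", "Black Panther" };
    private static readonly Random Random = new Random();

    [FunctionName("SendSuperhero")]
    // The return value of the function becomes the message body.
    [return: ServiceBus("SampleQueue", Connection = "ServiceBusConnection")]
    public static string Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        // Focus on the content only: pick a random superhero name.
        return Names[Random.Next(Names.Length)];
    }
}
```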
The ServiceBus attribute on the return value specifies that the returned string becomes a message. This message is going to be sent to "SampleQueue" using the connection specified in the local.settings.json file. The developer does not have to handle the creation of the queue client, timeouts, message serialization, or retry policies; just focus on the content. Next, a short look at the function that stores the content in Cosmos DB.
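Here is a sketch of the receiving function, combining a Service Bus trigger with the Cosmos DB output binding (this binding was still called DocumentDB in the Functions v1 timeframe; the database, collection, and connection setting names are illustrative):

```csharp
using System;
using Microsoft.Azure.WebJobs;

public static class StoreSuperhero
{
    [FunctionName("StoreSuperhero")]
    public static void Run(
        // The trigger delivers the message body and completes the message
        // automatically after a successful run of the function.
        [ServiceBusTrigger("SampleQueue", Connection = "ServiceBusConnection")] string heroName,
        // Whatever is assigned to this output parameter is persisted as a document.
        [CosmosDB("SuperheroDb", "Superheroes", ConnectionStringSetting = "CosmosDbConnection")] out dynamic document)
    {
        document = new { name = heroName, receivedAt = DateTime.UtcNow };
    }
}
```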
Azure Functions allow for triggering a function in several ways: HttpTrigger, TimerTrigger, Storage queues, blobs and tables, Service Bus, Cosmos DB, and even more. Again, the integration is done via attributes and consists mainly of the same information:
the name of the queue, the access permissions, and the connection information. There is no handling of connections or message retrieval, and it is not even necessary to manage the message status after working with the content: the message gets completed automatically after a successful run of the function. Writing to Cosmos DB is also very simple.
The information that needs to be provided is quite comparable to Service Bus. Not really surprising: it needs a connection and some specific information, in this case the database and document collection names rather than a queue name. Again, just focus on the content and return whatever needs to be persisted; the Azure Functions infrastructure will take care of it.
The drawback: the static nature of Azure Functions
Pretty amazing. Let's move on to the actual topic of this article. In production, a certain piece of functionality will usually be larger than these few lines of code. It will rely on functionality that may need to be shared between functions, or it will simply contain more, and more complex, logic. When you are used to the convenience of dependency injection, the static nature of Azure Functions can be considered a great hurdle.
The solution: Azure Functions extensibility
Luckily, Azure Functions offer great extensibility. The same mechanism that provides the flexibility of the attributes shown before can be used to provide dependency injection. Starting from the samples provided by Microsoft, freely available on GitHub, it is pretty easy to put together the necessary pieces. Have a look at the following resources to get started:
- Azure WebJobs SDK Extensions: provides samples of how to integrate custom bindings and triggers
- Azure WebJobs SDK: the complete sources of the implementation underlying Azure Functions
Starting with the extensions, let's follow the same way the Azure Functions infrastructure uses on its own: using attributes to offer functionality.
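A sketch of such a marker attribute; the name InjectAttribute matches the [Inject] usage shown later in this article:

```csharp
using System;
using Microsoft.Azure.WebJobs.Description;

[AttributeUsage(AttributeTargets.Parameter)] // usable on function parameters
[Binding]                                    // marks the attribute as a binding
public sealed class InjectAttribute : Attribute
{
}
```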
The first attribute tells the framework that it is going to be used on parameters of an Azure Function. The second marks its usage as a binding. There are several ways to integrate such an attribute. For dependency injection, it is necessary to know what the target type of the parameter will be. There may be simpler bindings where the target type is known up front; in that case, adding rules and using converters will be sufficient. To know the target type, there is the need to implement IBinding and IBindingProvider.
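A minimal sketch of the binding follows. IObjectResolver is the resolver abstraction described below; its exact shape here is an assumption:

```csharp
using System;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host.Bindings;
using Microsoft.Azure.WebJobs.Host.Protocols;

// Assumed abstraction over the Autofac container.
public interface IObjectResolver
{
    object Resolve(Type type);
}

public class InjectAttributeBinding : IBinding
{
    private readonly ParameterInfo _parameterInfo;
    private readonly IObjectResolver _objectResolver;

    public InjectAttributeBinding(ParameterInfo parameterInfo, IObjectResolver objectResolver)
    {
        _parameterInfo = parameterInfo;
        _objectResolver = objectResolver;
    }

    public bool FromAttribute => true;

    public Task<IValueProvider> BindAsync(BindingContext context)
    {
        // The ParameterInfo tells us which type to resolve from the container.
        var instance = _objectResolver.Resolve(_parameterInfo.ParameterType);
        return Task.FromResult<IValueProvider>(new InjectValueProvider(_parameterInfo.ParameterType, instance));
    }

    public Task<IValueProvider> BindAsync(object value, ValueBindingContext context) =>
        Task.FromResult<IValueProvider>(new InjectValueProvider(_parameterInfo.ParameterType, value));

    public ParameterDescriptor ToParameterDescriptor() =>
        new ParameterDescriptor { Name = _parameterInfo.Name };

    // Hands the resolved instance over to the runtime at invocation time.
    private sealed class InjectValueProvider : IValueProvider
    {
        private readonly object _value;

        public InjectValueProvider(Type type, object value)
        {
            Type = type;
            _value = value;
        }

        public Type Type { get; }

        public Task<object> GetValueAsync() => Task.FromResult(_value);

        public string ToInvokeString() => _value?.ToString();
    }
}
```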
Actually, there are two points of interest provided in the constructor of the InjectAttributeBinding implementation. The first is the ParameterInfo: with that information it is possible to find out the target type that shall be resolved. The other is the IObjectResolver instance, an abstraction that resolves components from Autofac via a ServiceLocator.
Having the binding in place, the next interface to implement is IBindingProvider. The implementation ensures that the context is available and that the context contains the attribute in question. If it does, it gets an IObjectResolver instance from the ServiceLocator and passes it over to the binding, together with the ParameterInfo retrieved from the context.
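A sketch of the provider. ServiceLocator.GetResolver() is a hypothetical helper standing in for whatever hands out the IObjectResolver in the actual repository:

```csharp
using System;
using System.Reflection;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs.Host.Bindings;

public class InjectAttributeBindingProvider : IBindingProvider
{
    public Task<IBinding> TryCreateAsync(BindingProviderContext context)
    {
        if (context == null)
        {
            throw new ArgumentNullException(nameof(context));
        }

        // Only parameters carrying the [Inject] attribute are handled here.
        var attribute = context.Parameter.GetCustomAttribute<InjectAttribute>(inherit: false);
        if (attribute == null)
        {
            return Task.FromResult<IBinding>(null);
        }

        // Pass the resolver and the ParameterInfo from the context over to the binding.
        IObjectResolver resolver = ServiceLocator.GetResolver(); // hypothetical helper
        return Task.FromResult<IBinding>(new InjectAttributeBinding(context.Parameter, resolver));
    }
}
```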
The next necessary step is to make this available to the Azure Functions framework. Azure Functions will search for a matching provider when it recognizes an unknown attribute in a function's parameter list. It then searches for an IExtensionConfigProvider implementation that can help to resolve the attribute in question.
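A sketch of such a config provider. The registration via IExtensionRegistry follows the pattern used in the WebJobs SDK extension samples; ServiceLocator.Build() is a hypothetical stand-in for building the Autofac container:

```csharp
using Microsoft.Azure.WebJobs.Host;
using Microsoft.Azure.WebJobs.Host.Bindings;
using Microsoft.Azure.WebJobs.Host.Config;

public class InjectConfiguration : IExtensionConfigProvider
{
    public void Initialize(ExtensionConfigContext context)
    {
        // Build the Autofac container from the discovered configuration
        // implementations (hypothetical helper; see the interface below).
        ServiceLocator.Build();

        // Make the [Inject] binding known to the runtime.
        context.Config.GetService<IExtensionRegistry>()
            .RegisterExtension<IBindingProvider>(new InjectAttributeBindingProvider());
    }
}
```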
This class is responsible for registering the InjectAttributeBindingProvider described before. Additionally, this class is the perfect place to register all necessary services with Autofac. How is that done? To be honest, it is kept very simple in this very first implementation. Let's consider the reasons for the chosen approach.
Things to be considered:
- The Azure Functions framework will call functions directly and autonomously. To not force developers to find a place where they need to configure the necessary services, the implementation searches for implementations of a certain interface, following the architecture of Azure Functions itself.
- Autofac allows for different kinds of registrations. To unify how registrations are done and to separate dependency injection services by topic, modules have been chosen. It should not be an issue to support other kinds of registrations later.
For the initialization of Autofac, this simple interface is available.
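A sketch, assuming the interface is named IDependencyInjectionConfig (the actual name in the repository may differ):

```csharp
using Autofac;

public interface IDependencyInjectionConfig
{
    // Register your Autofac modules on the supplied builder.
    void Configure(ContainerBuilder builder);
}
```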
You just need to create a class like this to initialize Autofac:
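(A sketch; the class and module names are illustrative.)

```csharp
using Autofac;

public class DependencyInjectionConfig : IDependencyInjectionConfig
{
    public void Configure(ContainerBuilder builder)
    {
        // One module per topic keeps the registrations tidy.
        builder.RegisterModule<SuperheroModule>();
    }
}
```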
The module itself just adds exactly one service:
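(A sketch; SuperheroService and ISuperheroService are illustrative names for the superhero name generator of this sample.)

```csharp
using Autofac;

public class SuperheroModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        // Exactly one service: the generator for random superhero names.
        builder.RegisterType<SuperheroService>().As<ISuperheroService>();
    }
}
```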
To use this dependency injection service, just add the required services, marked with the [Inject] attribute, to your function's parameter list. Putting the [Inject] attribute together with the service you are looking for does the magic:
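A sketch of such a function in the Functions v1 style (the function name and the ISuperheroService member are illustrative):

```csharp
using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class SuperheroFunction
{
    [FunctionName("SuperheroTest")]
    public static HttpResponseMessage Run(
        [HttpTrigger(AuthorizationLevel.Function, "get")] HttpRequestMessage req,
        // Resolved from the Autofac container by the [Inject] binding.
        [Inject] ISuperheroService superheroes)
    {
        return req.CreateResponse(HttpStatusCode.OK, superheroes.GetRandomName());
    }
}
```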
When you start the solution (Visual Studio 2017 is required; it is the first version that supports Azure Functions), the first function that uses the Inject attribute will cause the framework to load the extension. The function provided by the sample is a simple HTTP trigger function, so the Functions host prints a URL to the console that you can use for a short test of the functionality.
Copy that URL and put it into the browser of your choice. As you can see, the test method returns a string. Works!
Azure Functions' static nature can easily be overcome with the extensibility the framework provides. Let me just mention some different solutions out there:
- Using a different approach, with initialization within the function: devkimchi
- An excellent sample using the same approach with Unity: wille-zone
Finally, you can find all sources in this repository. A NuGet package will follow, as will tests. Enjoy!