A two-part series on creating Azure WebJobs from scratch and converting existing ones from .NET Framework to .NET Core


For a time, there was a long-standing debate over how our new and shiny microservices would get deployed to the cloud. Do we go with an Azure WebJob or these new Azure Functions? Is this code going to take longer than five minutes to do its job, or do I need the longer compute time just in case? At the time we were weighing these questions and many more, all of our code was still written in .NET Framework and Azure Functions v1 had just been released. We were also still very new to the concept of microservices and how to properly implement them, given the impending doom of our monolith. Needless to say, much has changed over the past few years with regard to both of these services. It seems Azure Functions are winning the fight and Azure WebJobs are being phased out. …


Refresh your cache from CosmosDB data automatically, all with function bindings; cache reads are far cheaper than the RUs spent querying Cosmos directly.

Azure CosmosDB… I love this logo :D

Why is this necessary?

I am a big fan of using CosmosDB, specifically the out-of-the-box SQL implementation. It's fast, easy to use, handles batching, has a robust client library, and has turnkey global distribution built in. I hosted a meetup a couple of years back with a colleague to discuss many of these benefits at length. As much as I love using it, the price adds up fast. Cosmos is billed in Request Units (RU/s), which are consumed at a set rate per read and write. The pricing page can be found here. Here's a quick summary of how it works, per Microsoft:

Provisioned throughput is expressed in Request Units per second (RU/s), which can be used for various database operations (e.g., inserts, reads, replaces, upserts, deletes, queries, etc.). For example, 1 RU/s is sufficient for processing one eventually consistent read per second of 1K item, and 5 RU/s is sufficient for processing one write per second of 1K item. …
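To make the savings concrete before the full walkthrough, here is a minimal sketch of the pattern the article builds toward: an Azure Function whose Cosmos DB change-feed trigger pushes each updated document into a Redis cache, so subsequent reads hit the cache instead of burning RUs. The database, collection, and connection-setting names ("StoreDb", "Products", "CosmosConnection", "RedisConnection") are hypothetical placeholders, not the article's actual resources.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Documents;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;
using StackExchange.Redis;

public static class CacheRefreshFunction
{
    // Shared multiplexer; "RedisConnection" is a hypothetical app setting.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect(
            System.Environment.GetEnvironmentVariable("RedisConnection"));

    [FunctionName("CacheRefresh")]
    public static async Task Run(
        // Fires whenever documents change in the monitored collection.
        [CosmosDBTrigger(
            databaseName: "StoreDb",            // hypothetical database
            collectionName: "Products",         // hypothetical collection
            ConnectionStringSetting = "CosmosConnection",
            LeaseCollectionName = "leases",
            CreateLeaseCollectionIfNotExists = true)]
        IReadOnlyList<Document> changedDocs,
        ILogger log)
    {
        IDatabase cache = Redis.GetDatabase();
        foreach (Document doc in changedDocs)
        {
            // Overwrite the cached copy so readers hit Redis instead of
            // spending Cosmos RUs on every lookup.
            await cache.StringSetAsync(doc.Id, doc.ToString());
            log.LogInformation("Refreshed cache entry {Id}", doc.Id);
        }
    }
}
```

With something like this in place, readers query the cache first and only fall through to Cosmos on a miss, which is where the RU savings come from.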


Restore, build, test, pack, and push: the easiest setup I've found for releasing a NuGet package that targets multiple frameworks.


Wait, why not use Azure DevOps?

Let me preface this article by saying that if I had a preferred way to run a CI pipeline, it would be Azure Pipelines, which is far smoother for the .NET ecosystem. You can mimic much of what Azure does in Travis with scripts and install steps, but Azure is usually the way to go. The reason I found myself using Travis is that I'm contributing to a project whose pipelines for the different language packs are all built in Travis CI. …
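For a rough sense of what that flow looks like in practice, here is a minimal .travis.yml sketch of the restore/build/test/pack/push steps; the SDK version, artifacts folder, and NUGET_API_KEY variable are assumptions for illustration, not the project's actual pipeline:

```yaml
language: csharp
mono: none
dotnet: 3.1.402        # assumed SDK version; pin to whatever the package targets

script:
  - dotnet restore
  - dotnet build -c Release --no-restore
  - dotnet test -c Release --no-build
  # Pack produces a single .nupkg containing every target framework.
  - dotnet pack -c Release --no-build -o ./artifacts

deploy:
  provider: script
  # NUGET_API_KEY is a hypothetical encrypted Travis env var.
  script: dotnet nuget push ./artifacts/*.nupkg -k $NUGET_API_KEY -s https://api.nuget.org/v3/index.json
  skip_cleanup: true
  on:
    tags: true
```

Gating the push step on tagged builds keeps ordinary commits from publishing to NuGet by accident.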


Convert existing .NET Framework WebJobs to .NET Core

This article is a continuation of my previous post. Most of the work and setup carries over from that post, so if you're not familiar with it, refer here.

The GitHub repo with the Core files can be found here: GitHub Repo

The branch that includes the Framework project can be found here

Analyze your existing project

Typically when we create a new service, a lot of the business logic and functionality is abstracted away from the main function and only referenced via NuGet packages or other projects in the solution. The beauty of .NET Core and .NET Framework is that both can consume libraries targeting the trusty .NET Standard. If you need to make a library multi-targeted, I will be posting an article on how to do that later on. For now, let's focus on converting an old .NET Framework WebJob to .NET Core. …
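As a preview of where the conversion ends up, here is a minimal sketch of a .NET Core WebJob entry point using the WebJobs SDK 3.x generic HostBuilder, which replaces the JobHostConfiguration/JobHost pair from the Framework-era SDK; the storage and console-logging extensions shown are illustrative choices, not the only options:

```csharp
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

internal static class Program
{
    private static async Task Main()
    {
        // HostBuilder replaces JobHostConfiguration/JobHost from the
        // .NET Framework WebJobs SDK (2.x).
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices(); // storage account plumbing
                b.AddAzureStorage();             // queue/blob/table triggers
            })
            .ConfigureLogging(b => b.AddConsole());

        using (var host = builder.Build())
        {
            await host.RunAsync(); // blocks, listening for triggered functions
        }
    }
}
```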

About

David Maman

Software Engineer by trade, tech geek forever
