So, after GCP and AWS, it was my turn to get my hands dirty with Azure. When I initially heard Microsoft was jumping onto the public cloud bandwagon, it seemed like a wannabe move. GCP itself had a lot of catching up to do with AWS. IBM and Oracle have been trying for a long time, but they have settled for a different genre of customers; e.g. Oracle is primarily focused on deploying its own legacy software like E-Business Suite and modern on-premise software like Oracle Fusion onto Oracle Public Cloud, instead of trying to win market share from AWS or GCP. IBM has managed to get some real customers like EA, but it's not quite there yet.
On the other hand, I think Microsoft Azure has come from behind and not only posed a serious challenge to GCP and AWS but also captured a major market share, with its unique offerings in machine learning services and its fantastic partnerships with and support for corporations.

With whatever I have explored so far, I have mixed opinions about it. It would be really great if Azure addressed the following concerns soon:

    •  Console - The Azure portal lists a great portfolio of services, but the look and feel seems a bit like SharePoint or Office 365. It makes sense that they have reused the web components there, but I really like the clean UI and responsive feedback of the GCP console for asynchronous activities like creating a resource.
      • HDInsight -
        1. Minimum Cores - The minimum of 3 instances (~8 cores) that you have to run, even just to get a POC done or to run a dev environment, is too high. Since HDInsight is an always-billed service (as opposed to pay-per-use), it may not be worth it for someone just testing/developing their services before deploying to production, where the minimum-capacity restriction makes sense. I don't know much about GCP's Dataproc, but I should be able to get a cheaper/smaller cluster there.
        2. Preserve State - There is no way to preserve the state of the cluster if you want to stop it and restart it later when you resume your development work. You just have to delete everything if you don't want to burn cash while you are not using it. I had to re-create the whole cluster all over again when I got time to work on the project. I wish I could just stop the cluster without deleting it and pay only a nominal price for the storage, like with EC2 or Compute Engine in GCP.
      • No Free Tier for Storage - In Azure, I could not find any database or storage tier that remains free after you run out of credits. GCP's Datastore and BigQuery free tiers have been really useful for trying things out without really burning cash. In Azure, Cosmos DB is chargeable as soon as you reserve capacity (unlike GCP's Datastore, which is free as long as you stay within a specific read/write rate).
      • Small Service Bus Queue Messages - Azure has several messaging services, but I found that the maximum size of a message in Service Bus Queues is only 1 MB; for Storage Queues it is even smaller, at 64 KB. This is very small compared to Google Cloud Pub/Sub's 10 MB message size limit.
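      As a rough illustration of how those limits bite, here is a minimal Python sketch (it does not use any Azure SDK; `chunk_payload` is a hypothetical helper, and the limits are the ones quoted above) that splits a payload into pieces small enough to fit through either queue:

```python
# Hypothetical sketch of working around per-message size limits by chunking.
# The limits below are the ones discussed in the text; in a real system each
# chunk would be sent via your queue client (e.g. azure.servicebus).

SERVICE_BUS_LIMIT = 1 * 1024 * 1024   # 1 MB per message (Service Bus Queues)
STORAGE_QUEUE_LIMIT = 64 * 1024       # 64 KB per message (Storage Queues)

def chunk_payload(payload: bytes, limit: int) -> list:
    """Split a payload into pieces that each fit under the queue's limit."""
    if not payload:
        return [b""]
    return [payload[i:i + limit] for i in range(0, len(payload), limit)]

# Example: a 2.5 MB blob needs 3 Service Bus messages but 40 Storage Queue
# messages -- one reason the small limits hurt for bigger payloads.
blob = b"x" * int(2.5 * 1024 * 1024)
print(len(chunk_payload(blob, SERVICE_BUS_LIMIT)))    # 3
print(len(chunk_payload(blob, STORAGE_QUEUE_LIMIT)))  # 40
```

      Chunking works, but it pushes reassembly and ordering concerns onto the consumer, which is exactly the kind of plumbing a 10 MB limit lets you avoid.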
      Apart from the above initial complaints, I have observed some genuinely good things about Azure Cosmos DB. I am yet to use it at high scale, but it has the following features, which could differentiate it from other managed NoSQL offerings like DynamoDB or Cloud Datastore:

      • Financially Backed SLA - Azure Cosmos DB has SLAs on latency that are financially backed, i.e. Microsoft will pay you back if they don't meet the SLA. I have not seen that anywhere else so far.
      • Multiple Interfaces - Azure Cosmos DB is not just a managed MongoDB. One can provision it with a SQL or Cassandra data model as well. It is also horizontally scalable, so you can increase capacity as you go without having to commit to a lot of capacity up front.
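      As a sketch of that multi-model point, assuming you use the Azure CLI: the same service can be stood up with different API surfaces just by varying the account kind. The account and resource-group names here are made up, and the exact flags may vary by CLI version:

```shell
# Hedged sketch: provisioning Cosmos DB accounts with different data models.
# --kind GlobalDocumentDB is the SQL (DocumentDB) API, --kind MongoDB speaks
# the MongoDB wire protocol, and the Cassandra API is enabled via a capability.
az cosmosdb create --name my-sql-cosmos --resource-group my-rg \
    --kind GlobalDocumentDB
az cosmosdb create --name my-mongo-cosmos --resource-group my-rg \
    --kind MongoDB
az cosmosdb create --name my-cassandra-cosmos --resource-group my-rg \
    --kind GlobalDocumentDB --capabilities EnableCassandra
```

      In other words, you pick the interface per account, while the underlying throughput and scaling story stays the same.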