After a really long break, I have decided to resume posting on my blog. This is something I enjoy doing, and it also helps solidify the knowledge I gain as I learn.
Last time, I wrote about the Azure Well-Architected Framework. In this post, I am going to write about another concept it proposes: Performance Efficiency.
Performance Efficiency
Performance is one of those non-functional, or quality attribute, requirements that is always on every stakeholder's mind. Alongside security, it is one of the most common requirements customers expect from any application.
Users want fast applications: responses to their requests with no delay, processes that take less time to complete, and so on. This means performance must be on our minds all the time. A great application with poor performance could drive a business to fail because users refuse to interact with the app, or simply switch to a competitor's.
To me, what matters most about any non-functional or quality attribute requirement is defining it clearly and in a measurable way, so you can test your application to see whether it fulfills a specific requirement. One way to do this is via scenarios. For example, you can write a performance scenario like: "the user must be logged into the application in less than 3 seconds". This clarity provides good input for the architecture; plus, you can use a fitness function or something similar to test the non-functional requirements and ensure your app is meeting its expectations.
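A fitness function for a scenario like this can be very small. Here is a minimal sketch; the `login_user` function and the 3-second budget are hypothetical stand-ins for your real authentication flow and your real requirement:

```python
import time

# Hypothetical stand-in for the real login flow; a real fitness function
# would call your application's authentication endpoint instead.
def login_user(username: str) -> bool:
    time.sleep(0.1)  # simulate network and processing time
    return True

def login_fitness(budget_seconds: float = 3.0) -> bool:
    """Fitness function: the login scenario must complete within its budget."""
    start = time.perf_counter()
    assert login_user("demo-user"), "login itself must succeed"
    elapsed = time.perf_counter() - start
    return elapsed < budget_seconds

print(login_fitness())  # True when login finishes within the budget
```

Because the scenario is expressed as a measurable threshold, this check can run in a CI pipeline and fail the build the moment the requirement stops being met.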
Now, getting back to the concept proposed by the Well-Architected Framework, regarding Performance Efficiency it says: "Performance efficiency is matching the resources to the demand of your services". So, let's dig into how to accomplish this.
The proposed patterns are the following:
- Leverage scaling up and scaling out.
- Optimize network performance.
- Optimize storage performance.
- Identify bottlenecks in your application.
Leverage scaling up and scaling out
First, keep the following definitions in mind:
Scaling up and down refers to adding or removing resources on a single server, for instance adding more memory or CPU to your virtual machine. Scaling out and in is the process of adding or removing instances of your application, for instance adding an additional instance of a service to handle the workload your application is receiving.
With those concepts in mind, Azure offers a quite nice autoscaling functionality to scale your App Service out and in. By setting threshold values that tell Azure when to scale out and/or in, you no longer need to worry about manually creating new instances of your apps.
Of course, everything has a tradeoff. You must take into account the state of your services: if an instance goes away, the state of the process it was handling must be stored somewhere so that another instance can pick it up and continue. As an example, if your service authenticates users, the state of the authenticated users must be stored in a database or a similar external store, so it is not tightly coupled to a specific instance.
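The threshold idea behind autoscaling can be sketched in a few lines. This is not Azure's actual autoscale engine, just a toy illustration of the rule it evaluates; the thresholds and instance limits are hypothetical:

```python
def autoscale_decision(avg_cpu_percent: float, instance_count: int,
                       scale_out_threshold: float = 70.0,
                       scale_in_threshold: float = 30.0,
                       min_instances: int = 1,
                       max_instances: int = 10) -> int:
    """Return the new instance count for a simple threshold-based rule."""
    if avg_cpu_percent > scale_out_threshold and instance_count < max_instances:
        return instance_count + 1  # demand is high: scale out
    if avg_cpu_percent < scale_in_threshold and instance_count > min_instances:
        return instance_count - 1  # demand is low: scale in
    return instance_count          # within thresholds: no change

print(autoscale_decision(85.0, 2))  # high CPU -> 3
print(autoscale_decision(20.0, 2))  # low CPU -> 1
```

Note the gap between the two thresholds: if they were the same value, the system could flap between scaling out and scaling in on every evaluation.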
Optimize Network Performance
In the cloud (depending on the scenario), it is quite common to have services distributed worldwide. Also, some businesses require presence in different parts of the globe (I have experienced this before). So, the goal of optimizing network performance is to reduce communication latency in two areas: between your services, and between your services and your users. Let's dig deeper into each.
Latency between services
Let's say you have a database allocated in a particular Azure region serving users worldwide. That could be problematic. For a scenario like this, some alternatives are available:
- Create replicas of the database in other regions. The data must then be synced between regions; in Azure, you can use Azure SQL Data Sync for this purpose.
- Use a globally distributed database system, such as Azure Cosmos DB. This way you can read from and write to the database regardless of location.
- Also, you can use a caching mechanism to store frequently used data. In Azure you have Azure Cache for Redis, which fits that specific purpose.
The decision to choose any of the above alternatives, or any other you may need, depends on your architecture and the context of your solution.
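When replicas exist in several regions, the read path usually just picks the closest one. A minimal sketch of that routing decision, with hypothetical region names and made-up latency figures:

```python
# Hypothetical latency (ms) from each user location to each database replica.
REPLICA_LATENCY_MS = {
    "europe-user": {"westeurope": 15, "eastus": 90, "southeastasia": 180},
    "asia-user":   {"westeurope": 170, "eastus": 210, "southeastasia": 25},
}

def nearest_replica(user_location: str) -> str:
    """Route reads to the replica with the lowest latency for this user."""
    latencies = REPLICA_LATENCY_MS[user_location]
    return min(latencies, key=latencies.get)

print(nearest_replica("europe-user"))  # westeurope
print(nearest_replica("asia-user"))    # southeastasia
```

In practice a globally distributed service such as Cosmos DB makes this decision for you, but the underlying idea is the same: serve each request from the replica closest to its origin.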
Latency between users and Azure resources
Azure offers some alternatives to tackle this scenario. I am going to name a few, but many more are certainly available.
- DNS load balancing: this allows you to distribute traffic across Azure regions. Azure Traffic Manager does exactly this. It can route users based on a specific configuration, such as the priority or weight of your endpoints, their performance, or the geographic location of users and services.
- Use a CDN solution to deliver static content. A Content Delivery Network (CDN) allows you to distribute content worldwide. The Azure solution for this is Azure CDN. The idea is to replicate your content globally across a group of servers, so users get that content from the closest one.
- Use Azure ExpressRoute to create a private, dedicated connection between your network and Azure.
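Two of the Traffic Manager routing methods mentioned above, priority and weighted, are easy to illustrate. This is only a sketch of the routing logic, not Traffic Manager's API; the endpoint names, priorities, and weights are hypothetical:

```python
import random

# Hypothetical endpoints. "priority" picks the healthy endpoint with the
# lowest priority number (failover); "weighted" spreads traffic by weight.
ENDPOINTS = [
    {"name": "primary",   "priority": 1, "weight": 80, "healthy": True},
    {"name": "secondary", "priority": 2, "weight": 20, "healthy": True},
]

def route(endpoints, mode: str) -> str:
    healthy = [e for e in endpoints if e["healthy"]]
    if mode == "priority":
        return min(healthy, key=lambda e: e["priority"])["name"]
    if mode == "weighted":
        names = [e["name"] for e in healthy]
        weights = [e["weight"] for e in healthy]
        return random.choices(names, weights=weights)[0]
    raise ValueError(f"unknown routing mode: {mode}")

print(route(ENDPOINTS, "priority"))  # primary (lowest priority number)
```

With priority routing, traffic only moves to `secondary` when `primary` is marked unhealthy, which is exactly the failover behavior you want from a DNS-level load balancer.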
Optimize Storage Performance
In this topic, there are two areas to consider:
Optimize Virtual Machine storage performance
If you go with the option of running your workloads on Virtual Machines in the cloud, it is important to ensure their performance, and storage is probably one of the main aspects to consider. Azure offers several options to improve the storage performance of your VMs:
- Local SSD storage: it improves performance, but works best for temporary data. This kind of storage is volatile, so if your VM goes through maintenance or is redeployed, the data may be lost.
- Standard SSD storage: it provides consistent performance, but with lower throughput; a good fit for lightly used workloads.
- Premium SSD storage: this is well suited for production workloads that require reliability and low latency.
- Standard HDD storage: this is recommended mostly for non-production environments.
- Disk striping: it can be used to improve the performance of your VMs even further. It basically works by spreading disk activity across multiple disks.
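The striping idea in the last bullet can be shown with a toy example. Real striping happens at the OS or hypervisor level (for example with Storage Spaces or RAID 0); this sketch only illustrates the round-robin placement that lets several disks work in parallel:

```python
def stripe(data: bytes, disk_count: int, chunk_size: int = 4):
    """Toy striping: place consecutive chunks on disks in round-robin order."""
    disks = [bytearray() for _ in range(disk_count)]
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    for index, chunk in enumerate(chunks):
        disks[index % disk_count].extend(chunk)  # round-robin placement
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", disk_count=2)
print(disks[0])  # chunks 0 and 2 -> bytearray(b'ABCDIJKL')
print(disks[1])  # chunks 1 and 3 -> bytearray(b'EFGHMNOP')
```

Because adjacent chunks land on different disks, a large sequential read or write is served by all disks at once, which is where the throughput gain comes from.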
Optimize Storage Performance for your application
Two main options are available:
- Caching: you can place a caching mechanism between your application and the database or data store you are using. Azure offers Azure Cache for Redis for this.
- Polyglot persistence: your application can use different types of data stores, based on your needs and what best fits your requirements. Azure, among others, has the following options: Blob Storage, NoSQL databases, and relational databases.
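The caching option above is typically implemented with the cache-aside pattern. Here is a minimal sketch where plain dicts stand in for Azure Cache for Redis and the database; the keys and data are hypothetical:

```python
# Cache-aside: check the cache first, fall back to the data store on a
# miss, then populate the cache so the next read is fast.
database = {"user:1": {"name": "Ada"}}  # stand-in for the real data store
cache = {}                              # stand-in for Azure Cache for Redis

def get_user(key: str):
    if key in cache:             # 1. cache hit: return immediately
        return cache[key]
    value = database.get(key)    # 2. cache miss: read the data store
    if value is not None:
        cache[key] = value       # 3. populate the cache for next time
    return value

print(get_user("user:1"))   # first call misses and reads the database
print("user:1" in cache)    # True: later calls are served from the cache
```

A real implementation would also set an expiration time on each cached entry and invalidate it when the underlying data changes, otherwise the cache can serve stale data.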
Of course, depending on your situation, there might be many other areas you need to focus on to deliver the performance your app requires. It is important to have proper monitoring mechanisms in place that allow you to identify the areas that can be improved; then tools, patterns, and solutions are available to tackle your problems.