Building your first cloud service involves a steep learning curve. When your project expects you to present measurable results quickly, there may be no time to get acquainted with all the relevant best practices. However, there are certain things you should be aware of and do properly right from the beginning to avoid future disappointments, spiraling development costs and even data breaches.
One of the laws of software development is that the decisions made at the beginning of a project lock things in place, at least to some degree. Some technical choices remain easy to change afterwards, but often the chosen solutions are hard to drop later on. Certain infrastructure-related changes in particular are extremely hard to carry out post hoc, which means these things should be done correctly already during the early stages of development. If best practices haven't been followed when implementing the cloud infrastructure, the resulting technical debt can be difficult to manage and can hamper the further development of your software project.
Experiences gathered from various client projects can prove valuable when working on an extensive cloud service with several developers and a long projected lifespan. This text will focus on giving advice on how to use the two largest cloud service providers, AWS and Azure.
Create a separate account for each cloud environment
Software development often calls for several cloud environments. A common practice is to have one environment for testing and another for production. As the name suggests, the test environment is used for testing and quality control, whereas the production environment is meant for the actual end users. Sometimes there’s a need for more than one test environment. If that’s the case, one environment may serve the needs of software development, while another is used by the organization’s test users, for example.
The test environment needs to resemble the production environment as closely as possible. This way the issues encountered in production can be reproduced in the test environment, which is a significant help when trying to get to the root of a problem. Having similar test and production environments usually ensures that software updates confirmed in the test environment also work on the production side.
While ensuring the similarity between the test environment and the production environment, you also want to make sure that the test environment doesn’t affect the production side – or vice versa. For this reason, the cloud resources of the test and production environments need to be separated in one way or another. The main thing to ensure is that the different environments of the software do not share any of their cloud resources. Instead, the database of the test environment, for example, needs to run on a different database service than that of the production environment.
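One simple way to enforce this separation at the application level is to resolve resources such as the database strictly by environment name, so that a test deployment can never silently fall back to a production connection. The following is a minimal sketch; the connection strings, host names, and the `APP_ENV` variable are illustrative placeholders, not part of any real setup.

```python
import os

# Hypothetical per-environment settings: each environment points to its
# own database service, so test traffic can never reach production data.
DATABASE_URLS = {
    "test": "postgres://test-db.example.internal:5432/app",
    "production": "postgres://prod-db.example.internal:5432/app",
}

def database_url(environment: str) -> str:
    """Return the database URL for the given environment, failing loudly
    on unknown names instead of silently defaulting to production."""
    try:
        return DATABASE_URLS[environment]
    except KeyError:
        raise ValueError(f"unknown environment: {environment!r}")

# The running service picks its environment from configuration,
# e.g. an APP_ENV environment variable set at deployment time.
current = database_url(os.environ.get("APP_ENV", "test"))
```

The key design choice is failing on unknown environment names rather than defaulting: a misconfigured deployment should crash at startup, not quietly share production resources.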
Separating resources in a cloud environment
Cloud service providers offer several ways of managing and separating environments. A popular approach is to divide cloud resources into Resource Groups, which both AWS and Azure support. This way, the cloud resources required by the production environment are clearly separated from test resources. Both AWS and Azure also make it possible to limit access at the Resource Group level, so that different access rights can be granted to the production and test environments. Other alternatives for dividing resources include tags and the built-in staging/production functionality of individual cloud service components (e.g. Azure App Service staging slots).
A foolproof method for separating software cloud environments is to create a separate account for each of the environments. In the world of AWS, this means one AWS Account per environment and, in Azure, a separate Subscription for each environment. This way, the production-related cloud resources are managed through one account and each of the test environments via its own account.
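On AWS, the account-per-environment model is typically set up with AWS Organizations, which lets a management account create member accounts programmatically. The sketch below shows the idea using boto3; the project name and e-mail addresses are illustrative placeholders, and the actual API call requires Organizations permissions in a real management account.

```python
# Sketch: one AWS member account per environment via AWS Organizations.
# Project name and e-mail addresses are placeholders for illustration.
ENVIRONMENTS = ["production", "test"]

def account_request(env: str, project: str = "myservice") -> dict:
    """Build the CreateAccount parameters for one environment."""
    return {
        "Email": f"aws+{project}-{env}@example.com",  # placeholder address
        "AccountName": f"{project}-{env}",
    }

def create_environment_accounts() -> None:
    """Create the member accounts (requires AWS Organizations access)."""
    import boto3  # imported here so the sketch runs without the SDK installed
    org = boto3.client("organizations")
    for env in ENVIRONMENTS:
        org.create_account(**account_request(env))
```

Each account then holds only that environment's resources; on Azure, the analogous step would be creating one Subscription per environment under a shared management group.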
One environment per account offers unrivalled benefits
When each cloud environment has its own account, it's impossible to mix the cloud resources of the test environments with those of the production side. It's not unusual for a test environment to end up using the production database due to a faulty configuration, which may cause problems for the production environment as well. Dividing the environments into separate accounts prevents – or at least significantly reduces – the risk of these types of problems.
When the cloud environments are linked to the same account, all users are quite often given access to all the cloud resources linked with the account. However, according to the principle of least privilege, access to production resources should be limited only to those users who really need it. Having cloud environments divided into different accounts ensures that granting access to a production environment always requires a conscious decision.
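As a concrete illustration of least privilege, the policy below grants read-only access to a single test-environment S3 bucket and nothing else. The bucket name is a placeholder; in practice such a policy document would be attached to a role or group via IAM (console, CLI, or an IaC tool).

```python
import json

# A least-privilege example: read-only access to one test-environment
# S3 bucket, and nothing else. The bucket name is a placeholder.
READ_ONLY_TEST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::myservice-test-data",
                "arn:aws:s3:::myservice-test-data/*",
            ],
        }
    ],
}

# Serialize the policy document for whichever tool attaches it.
policy_json = json.dumps(READ_ONLY_TEST_POLICY, indent=2)
```

Note what the policy does *not* contain: no wildcard actions and no production ARNs, so access to production resources still requires a separate, conscious grant.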
The one environment per account model also has benefits beyond isolation, environment-specific invoicing being one of them. If a test environment is no longer needed, for example, its account can be confidently removed without having to worry about affecting production or other environments. Cloud services also impose certain per-account limits and quotas (both AWS and Azure have them) that can be easily avoided by dividing the environments into different accounts.
Creating a separate account for each cloud environment also presents some challenges. User management becomes more laborious, since users need to be granted access to several different accounts. And because the invoicing of cloud services is based on account-level resource usage, billing details need to be added separately for each individual account. Solutions such as AWS Consolidated Billing can help manage this problem.
Provisioning cloud resources automatically
Cloud service providers usually offer elegant web consoles that let you spin up cloud services quickly. However, relying on them can be a pitfall and something to avoid.
When developing new software, the first thing to be built is usually the production environment. Later on – maybe only after the software has been rolled out – someone realizes that a test environment is also required. Then someone else may point out that several different test environments are needed. If the infrastructure has been built by hand through the cloud provider's console, building an identical test environment later may prove troublesome. And what if you decide to make changes to the production environment infrastructure at a later stage – do you think you will remember to update all the test environments too?
Even big organizations, such as Visma, have begun to realize that it doesn't make sense to provision and maintain cloud services manually. Google's State of DevOps 2019 report states that most top-performing organizations provision their cloud environments automatically. Automatic provisioning of cloud resources is one of the factors that increases software development teams' productivity in the long term.
Infrastructure as Code means fewer mistakes and better information security
The automatic provisioning of the cloud service environment and its maintenance through coding (Infrastructure as Code, IaC) automates cloud environment updates, enables reviews, reduces the chance for human error and makes the cloud service safer. If control of the cloud environment is lost due to a security breach, for example, a similar environment can be recreated from scratch. This makes Disaster Recovery significantly quicker compared to a situation where the cloud environment is maintained manually.
The Infrastructure as Code approach uses a template to configure and launch the cloud services required by the software. The service can be any cloud service, such as a virtual machine, virtual network, load balancer, or a database. When changes are made to the configuration file, the changes are reflected in the cloud environment. This makes managing several different cloud environments significantly easier.
A parameterized configuration file can be automatically run into all the environments, which significantly reduces the risk of human error. Updating a cloud environment using a configuration file ensures that the different environments stay as identical as possible. This is particularly useful for troubleshooting and information security since all environments basically have the same level of protection.
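The idea of one parameterized template driving all environments can be sketched as follows. This is a simplified model loosely following AWS CloudFormation's template and parameter format; the resource names and instance types are illustrative, and in practice the template body would be handed to a real IaC tool (e.g. CloudFormation's create-stack operation) rather than built inline.

```python
import json

# A minimal parameterized infrastructure template, loosely modelled on
# AWS CloudFormation: one template describes the resources, and
# per-environment parameters produce structurally identical stacks.
TEMPLATE = {
    "Parameters": {
        "EnvironmentName": {"Type": "String"},
        "InstanceType": {"Type": "String", "Default": "t3.micro"},
    },
    "Resources": {
        "AppServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {"InstanceType": {"Ref": "InstanceType"}},
        }
    },
}

def stack_parameters(env: str, instance_type: str) -> list:
    """Build the parameter list that accompanies the template,
    in the shape CloudFormation expects for stack operations."""
    return [
        {"ParameterKey": "EnvironmentName", "ParameterValue": env},
        {"ParameterKey": "InstanceType", "ParameterValue": instance_type},
    ]

# The same template body serves every environment; only parameters differ.
test_params = stack_parameters("test", "t3.micro")
prod_params = stack_parameters("production", "m5.large")
template_body = json.dumps(TEMPLATE)
```

Because both environments are deployed from the identical `template_body`, any structural change to the infrastructure is made once and rolled out everywhere, which is exactly what keeps the environments from drifting apart.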
Finally, if you want to get acquainted with infrastructure-as-code tools and learn more about them, I recommend looking up some of the more popular choices. These include CloudFormation (AWS), Azure Resource Manager (Azure), Cloud Deployment Manager (Google Cloud Platform), and the provider-agnostic Terraform.