DevOps methodologies are often represented as a multi-step life cycle. Among those steps, monitoring is a clear standout: it is what allows us to measure and evaluate the performance, stability, and usage of our applications. In today's context of distributed applications and microservices, monitoring matters more than ever.
For a software development company like GSoft, the initial creative step (the code) is often brought to the fore. It is our strength, after all, our bread and butter. But how do we make sure the code we produce works just as well in production, in the hands of our clients?
To err is human
Early on, our development team at ShareGate: Apricot ran into a situation many of us know well: the infamous problem that only appears in production. The application was suffering from performance issues and causing slowdowns. Despite the capable people assigned to the case, it was difficult, if not impossible, to reproduce the particularities of the production environment and truly understand what was going on.
A real eye-opener
This slightly embarrassing situation led to a growing realization: we needed more visibility. To dissect the problem and isolate the culprit, we added instrumentation at strategic points in our code. This instrumentation let us measure performance right at the source and pinpoint potential bottlenecks.
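The article does not show the actual instrumentation, but the idea can be sketched with a minimal timing decorator. Everything here is hypothetical (the `timed` decorator and the `migrate_item` workload are illustrations, not the team's real code); the point is that each strategic function reports its own duration at the source:

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("instrumentation")

def timed(func):
    """Log how long the wrapped function takes, right where the work happens."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logger.info("%s took %.1f ms", func.__name__, elapsed_ms)
    return wrapper

@timed
def migrate_item(item):
    # Stand-in for a real unit of application work.
    time.sleep(0.01)
    return item.upper()
```

In a real setup the measured durations would be shipped to a metrics backend rather than just logged, but even this level of detail is enough to spot which function is the bottleneck.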
Another aspect of the application that deserved closer attention was the state of our infrastructure. We had no warning system for overloaded resources, a likely contributor to our performance issues.
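A warning system for overloaded resources can start as something very small: compare current usage ratios against alert thresholds and flag whatever crosses the line. The function and the metric names below are illustrative assumptions, not the team's actual tooling:

```python
def over_threshold(metrics, thresholds):
    """Return the names of resources whose usage ratio meets or exceeds
    their alert threshold. Unlisted resources default to a 100% threshold."""
    return [
        name
        for name, value in metrics.items()
        if value >= thresholds.get(name, 1.0)
    ]

# Hypothetical snapshot: CPU is saturated, memory and disk are fine.
current = {"cpu": 0.97, "memory": 0.55, "disk": 0.40}
limits = {"cpu": 0.90, "memory": 0.85, "disk": 0.90}
for resource in over_threshold(current, limits):
    print(f"ALERT: {resource} is over its threshold")
```

In practice the alert would page someone or feed a dashboard instead of printing, but the core check stays this simple.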
The initial results of our monitoring effort were quite promising. They allowed us to quickly pinpoint what was going wrong, and then to make much better-informed decisions about the kind of infrastructure our application required. They also made us realize that we absolutely needed a better long-term vision for our monitoring needs.
Our practices, today
A key lesson we learned over time is that collecting information is one thing; actually understanding and analyzing it is another! To help ourselves, we invested in adding context to our monitoring, that is, correlating the metrics with other useful sources of information, such as application logs.
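One common way to make that correlation possible, sketched here as an assumption rather than the team's actual approach, is to stamp every log line with a correlation id that also travels with the metrics. Python's standard `logging` module supports this through a filter:

```python
import logging

class CorrelationFilter(logging.Filter):
    """Attach a correlation id to every record so log lines can later be
    joined with metrics emitted for the same request or operation."""

    def __init__(self, correlation_id):
        super().__init__()
        self.correlation_id = correlation_id

    def filter(self, record):
        record.correlation_id = self.correlation_id
        return True  # keep the record

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(correlation_id)s %(levelname)s %(message)s")
)
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.addFilter(CorrelationFilter("req-42"))  # "req-42" is a made-up id
logger.setLevel(logging.INFO)

logger.info("migration started")
```

With the same id on both sides, a spike in a metric can be traced back to the exact log lines that explain it.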
Another interesting thing we noticed is that the value of monitoring extends well beyond the development and operations teams. For example, managers can more easily assess product performance through service-level agreements (SLAs) and response times during incidents.
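An SLA check of that kind often boils down to one number: the fraction of requests answered within a target time. The function and the 500 ms target below are illustrative assumptions, not figures from the article:

```python
def sla_compliance(response_times_ms, target_ms=500):
    """Fraction of requests answered within the SLA target.
    An empty sample is treated as fully compliant."""
    if not response_times_ms:
        return 1.0
    within = sum(1 for t in response_times_ms if t <= target_ms)
    return within / len(response_times_ms)

# Hypothetical sample: two of three requests met the 500 ms target.
print(f"{sla_compliance([120, 340, 910]):.0%} of requests within SLA")
```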
What’s worth remembering
“Data is cheap. Data is king.”
Monitoring is more accessible than you might think. Many tools exist to collect information and to validate and ensure the value we deliver to clients.
Start simple and measure it.
In a product's infancy, avoid over-engineering. Opt for simplicity instead, and implement monitoring as early as possible. It lets you justify your investments with tangible data. Bonus: product maintenance and day-to-day operations are easier from the very beginning.
A last word
In a world of microservices and distributed applications, development simplicity often comes at the expense of operational simplicity. In that world, monitoring is a powerful tool to have.