Fighting Deployment Anxiety: 5 Tips from the Pros

You know the feeling: You’ve put in hours of work developing a new feature. It has gone through QA, been tested and passed all stakeholder inspections, and now you’re ready to deploy. Despite your experience and preparation, you have a pit in your stomach. What if something goes wrong and it brings your app to its knees? What if it doesn’t function in your production environment the way it’s supposed to? We’ve all been there. Fortunately, we also have some ways to help.

Electric Cloud CTO Anders Wallgren recently participated in a webinar with Gene Kim, CTO, researcher and co-author of “The Phoenix Project” and “The DevOps Handbook.” Hosted by DevOps.com Editor in Chief Alan Shimel, the webinar addresses lessons learned from large-scale DevOps transformations, the business value of DevOps and some of the best tips and tricks for fighting deployment anxiety. We pulled out the top five tips and tricks from Kim and Wallgren for making deployment worries a thing of the past.

Tip 1: Automate Your Deployment

A theme throughout the webinar from both Kim and Wallgren was “automation.” One of the big reasons for this is that typically, when you automate your deployments, you’re also automating the diagnosis of problems in those deployments. You can’t always just rely on the exit code of a command-line tool to tell you whether something went right or went wrong. To help keep your deployments healthy, you’ll want to do testing along the way, possibly as a blue-green deployment or a partial deployment. However, if you don’t have automation behind that, it can be difficult to get that accomplished at speed. Without automation, you may start to rely on heroic efforts by individuals—so that every time you are doing a deployment, someone is having to do a figurative tightrope-walk to get everything accomplished. This creates a tough environment for your employees, and can lead to extensive downtime (and lost profits).
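To make the blue-green idea concrete, here is a minimal sketch of an automated traffic switch gated on a health check. The `health_check` and `switch_traffic` functions are illustrative stand-ins for your load balancer and monitoring APIs, not any real tool's interface:

```python
# Minimal blue-green switch sketch. All names here are hypothetical
# placeholders for your router and monitoring integrations.

def health_check(env):
    """Pretend probe: the environment is healthy if its app reports ok."""
    return env.get("status") == "ok"

def switch_traffic(router, target):
    """Point traffic at the target color only if it passes the check."""
    if not health_check(target):
        # Leave traffic on the current color and surface the failure.
        return {"active": router["active"], "switched": False}
    return {"active": target["color"], "switched": True}

router = {"active": "blue"}
green = {"color": "green", "status": "ok"}
result = switch_traffic(router, green)
```

Because the check is part of the automation, a failing environment never receives traffic, and no one has to walk the tightrope by hand.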

Tip 2: Enable Self-Service Deployments to Any Infrastructure

You’re probably wondering why this is a separate topic from automated deployments. That’s because the majority of deployments that occur in a given release pipeline aren’t going into production. Most deployments end up in testing environments (for performance testing, regression testing or just developer testing). It’s also important that you give your engineers the ability to use the same deployment methodology (artifacts, process definition, environment definition, etc.) regardless of where they are deploying. From a cost perspective, allowing QA to test on a local Kubernetes cluster might be more cost-effective for your enterprise than running tests on Amazon Container Service. But allowing teams to use the same approved automation as part of every deployment, to any location, means a more thoroughly rehearsed and therefore less anxiety-inducing deployment.

The ability to do self-service deployments also gives your teams the ability to move substantially faster when they need to deploy something for testing by avoiding the need to open a ticket and get your Ops teams involved. By allowing self-service deployments, you can avoid the process of running around to find the right person to accomplish the deployment you need to run, and empower your organization to react and build more autonomously across different groups. The result is better-tested and built deployments (resulting in lower deployment stress and fewer errors).
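One way to picture "same methodology, any target" is a deployment definition where the artifact and the process are fixed and only the environment varies. This is a sketch under assumed names (`Deployment`, `run`), not a real product's API:

```python
# Sketch: one deployment procedure, reused verbatim for every target.
from dataclasses import dataclass

@dataclass
class Deployment:
    artifact: str      # the versioned artifact being shipped
    process: list      # ordered steps, identical in every environment
    environment: dict  # the only part that varies per target

def run(deployment):
    """Execute the same steps against whichever environment is given."""
    host = deployment.environment["host"]
    return [f"{step} {deployment.artifact} on {host}"
            for step in deployment.process]

steps = ["stop", "copy", "start", "smoke-test"]
qa = Deployment("app-1.4.2.tar.gz", steps, {"host": "qa.local"})
prod = Deployment("app-1.4.2.tar.gz", steps, {"host": "prod.example.com"})
```

By the time `run(prod)` happens, the identical process has already been rehearsed dozens of times against `qa`.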

Tip 3: Self-Service Environments

We’re continuing to roll with the theme of automation, but this time we’re talking about self-service environments. More often than not, we’re working with complex distributed apps. This means things such as a containerized microservice application, DBMSs, messaging, key value stores, legacy applications and third-party vendors that we need to integrate with—all things that make our deployments complicated. Being able to consistently provision complex environments is a very important part of being able to successfully recreate a deployment as you promote the application through its life cycle.

It’s important to make sure that your engineers, and in particular, your QA and development engineers, have easy access to environments and that they’re not standing around waiting for access. When you don’t have the ability to get VMs or containers provisioned and configured correctly to receive your application, you run into problems. One of these problems is that your ability to test rapidly just became much more difficult. When you can’t get a near-production quality environment stood up in a few minutes, it means that your bug fix that requires just a few lines of code could potentially take weeks to test. Suddenly you need to requisition a set of VMs to test on (which could take weeks) and another day or two to do the deployment, and only then can you complete your tests. Things become painfully slow. And, when it comes to testing and fixing bugs, slow equals stress.

Tip 4: Artifact Repositories

Want to feel more confident about your releases? Look at your artifact repositories. Use of a shared artifact repository in production environments has been shown to be a strong predictor of deployment success. If you’re not versioning your binaries and have little insight into where they came from and how they were built, failed deployments are likely to follow.
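The core of a trustworthy artifact repository is small: every binary gets a version, a checksum and build provenance, and deployments verify against the recorded hash before shipping. This is a minimal sketch of that idea (the `publish`/`verify` names and the commit/build values are hypothetical):

```python
# Sketch: versioned binaries with provenance and checksum verification,
# so a deployment can prove exactly what it is about to ship.
import hashlib

def publish(repo, name, version, data, built_from):
    """Record an artifact with its checksum and build provenance."""
    digest = hashlib.sha256(data).hexdigest()
    repo[(name, version)] = {"sha256": digest, "built_from": built_from}
    return digest

def verify(repo, name, version, data):
    """Refuse any binary that does not match the recorded checksum."""
    recorded = repo[(name, version)]["sha256"]
    return recorded == hashlib.sha256(data).hexdigest()

repo = {}
binary = b"...compiled bytes..."
publish(repo, "app", "1.4.2", binary,
        built_from="CI build 87")  # hypothetical provenance value
ok = verify(repo, "app", "1.4.2", binary)
```

A tampered or mystery binary fails `verify` and never reaches an environment, which is exactly the insight the tip asks for.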

It may not be easy or simple to get your artifact repositories in line, but it’s a labor that will pay massive dividends in the long term. Electric Cloud worked with a massive online mortgage company that had the challenge of deploying large artifacts (gigabytes) into multiple data centers. To get everything up to speed, it was necessary to transfer those artifacts out of their build and test infrastructure and into their pre-production and production infrastructure. Because of the size of the artifacts and concerns about worldwide availability for the files, Electric Cloud worked with this financial services company to use Amazon S3 as the backing store for the repository server to create a long-term solution that is flexible and scalable.

Tip 5: Security and Auditability

If you work in a regulated industry, you already know the heartburn auditability and governance can cause. Security is one of the most important aspects of many application delivery pipelines, if not the single most important. The reality is, we are just beginning to scratch the surface of the problems we will see in the future when it comes to infrastructures getting hacked. So what’s our tip? Again, it comes back to automation. At Electric Cloud, there is a saying: Automation is auditing; automation is documentation.

The idea behind this is that by automating your deployment tasks, you are essentially creating a very real and tangible paper trail that shows what you intended to do (the automation definition) and a report of exactly what happened (the automation logs). When you rely on manual deployments, you take repeatability and predictability out of your workflow. At that point, you are forced to report based on people’s spreadsheets, emails and memories. This is one of those things that doesn’t become a big deal until it’s a really big deal, and, to borrow another saying, an ounce of prevention is worth a pound of cure. In the case of auditability and security, being able to retrace your steps, because your steps are always the same, is a fairly foolproof way to make your audits much smoother.
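"Automation is auditing" can be sketched in a few lines: a pipeline runner that records every step, its timestamp and its outcome produces the audit trail as a side effect of simply running. The runner below is an illustrative toy, not any vendor's product:

```python
# Sketch: a pipeline runner whose execution log doubles as an audit record.
from datetime import datetime, timezone

def run_pipeline(steps):
    """Run each (name, action) step, logging intent and outcome."""
    audit_log = []
    for name, action in steps:
        entry = {"step": name,
                 "at": datetime.now(timezone.utc).isoformat()}
        try:
            entry["result"] = action()
            entry["status"] = "ok"
        except Exception as exc:
            entry["status"] = "failed"
            entry["error"] = str(exc)
        audit_log.append(entry)
    return audit_log

log = run_pipeline([
    ("deploy",     lambda: "artifact app-1.4.2 deployed"),
    ("smoke-test", lambda: "all checks passed"),
])
# The log is the audit report: every step, when it ran, and what happened.
```

No one reconstructs the release from spreadsheets afterward; the record already exists in a consumable format.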

To provide a real-world example of how automation can make auditing and security much less painful, consider one of Electric Cloud’s clients, an aerospace customer with very stringent and demanding security standards. In the past, at the end of each release, the company had a two-week process in which everybody sat down and compiled a list of every single check-in deployed as part of that release. They had to understand where each check-in was built and exactly how it was tested, and then assemble all of that information into a report. It was an extremely time-consuming, but seemingly necessary, process. By automating many of the steps in the pipeline, however, the audit response effort went from a two-week process to just a few hours. They accomplished this because 99 percent of the information they needed was already in the automation system and was simply scraped as part of the process. The task went from manually pulling that data to just getting it into consumable formats.

Summary

By now you’ve certainly seen a theme emerge in our recommendations for reducing the stress of deployments—namely, automate! By putting effort into automation and streamlining your pipeline, you can take what was once a very labor-intensive and sometimes stomach-churning process and make it much more enjoyable. Not only will this save you time and money in the long run, but it will also allow you to focus on building better software, doing more deployments and focusing on the tasks that bring your business and your customers real value.

For more details on how you can relieve your deployment anxiety, and to hear the rest of the discussion focusing on the lessons learned from large-scale enterprise DevOps transformations and the business value of DevOps, watch the full replay.

More Stories By Anders Wallgren

Anders Wallgren is Chief Technology Officer of Electric Cloud. Anders brings with him over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse. Anders also held management positions at Macromedia (MACR), Common Ground Software and Verity (VRTY), where he played critical technical leadership roles in delivering award-winning technologies such as Macromedia’s Director 7 and various Shockwave products.