DevOps Myths & Misconceptions

Common DevOps Myths and Misconceptions

“Wait, what actually is DevOps?”

If only I had a dime for every time someone asked me this. For many, the term DevOps comes loaded with misconceptions and myths. Today, we’re going to look at some of the common myths that surround the term so that you have a better understanding of what it is. Armed with this knowledge, you’ll understand why you need it and be able to explain it clearly. And you’ll be equipped to share its ideas with colleagues or your boss.

So, What Is DevOps?

Before we go through the myths of DevOps, we’ll need to define what DevOps actually is. Put simply, DevOps is the commitment to aligning both development and operations toward a common set of goals. Usually, for a DevOps organization, that goal is to have early and continuous software delivery.

The Three Ways of DevOps

DevOps is not a role. And DevOps is not a team. But why?

We’ll get to that in just a moment. But before we explain the myths, let’s build on our definition of DevOps by looking at “the three ways” of DevOps: flow, feedback, and continual learning.

  1. Flow—This is how long it takes (and how difficult it is) to get your work from code commit to deployment. Flow is your metaphorical factory assembly line for your code. And achieving flow usually means investment in automation and tooling. This often looks like lots of fast-running unit tests, a smattering of integration tests, and then finally some (but only a few!) journey tests. This test setup is what is known as the testing pyramid. Additionally, flow is usually facilitated by what’s known as a pipeline (there’s a small sketch of one right after this list).
  2. Feedback—Good flow requires good feedback. To move things through our pipeline quickly, we need to know as early as possible if the work we’re doing will cause an issue. Maybe our code introduces a bug in a different part of the codebase. Or maybe the code causes a serious performance degradation. These things happen. But if they’re going to happen, we want to know about them as early as possible. Feedback is where concepts like “shift left” come from. “Shift left” is the idea that we want to move our testing to as early in the process as possible.
  3. Continual Learning—DevOps isn’t a destination. DevOps is the constant refinement of the process toward the early delivery of software. As we add more team members, productivity should go up, not down. Continual learning comes by having good production analytics in place. In practice, this could look like conducting post-mortems following an outage. Or it could look like performing process retrospectives at periodic intervals.
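
To make flow and feedback a little more concrete, here is a minimal sketch of a pipeline script shaped like the testing pyramid: many fast unit tests first, fewer integration tests, only a handful of journey tests, and a deploy at the end. The stage names, test paths, and deploy command are hypothetical placeholders rather than a recommendation for any particular CI tool.

```python
# pipeline.py -- a minimal, hypothetical pipeline sketch illustrating flow and
# fast feedback. Stage names, paths, and commands are placeholders.
import subprocess
import sys

# Ordered like the testing pyramid: the fastest, most numerous checks run first
# so feedback arrives as early as possible ("shift left").
STAGES = [
    ("unit tests",        ["pytest", "tests/unit", "-q"]),
    ("integration tests", ["pytest", "tests/integration", "-q"]),
    ("journey tests",     ["pytest", "tests/journey", "-q"]),
    ("deploy",            ["./deploy.sh", "staging"]),
]

def run_pipeline() -> int:
    for name, command in STAGES:
        print(f"--> {name}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Fail fast: stop the line and surface the feedback immediately.
            print(f"Pipeline stopped at '{name}'", file=sys.stderr)
            return result.returncode
    print("Change deployed -- flow complete.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```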

The three ways are abstract, I’ll concede. But it’s the process of converting these abstract ideas into concrete concepts and tools that has created mass confusion throughout the industry.

So, without further ado, let’s do some myth busting!

Myth 1: DevOps Is a Role

As we covered in the introduction, DevOps is the commitment to collaboration across our development and operations. Based on this definition, it’s fundamentally impossible for DevOps to be a role. We can champion DevOps and we can even teach DevOps practices, but we can’t be DevOps.

Simply hiring people into a position called “DevOps” doesn’t by itself ensure we practice DevOps. Given the wrong organizational constraints, setup, and working practices, your newly hired “DevOps” person will quickly start to look like a traditional operations team member with goals that conflict with development’s. A wolf in sheep’s clothing! DevOps is something you do, not something you are.

DevOps is not a role.

Myth 2: DevOps Is Tooling

For me, this is easily the most frustrating myth.

If you’ve ever opened up the AWS console, you know what it feels like to be overwhelmed by tooling. I’ve worked on cloud software for years, and I still find myself thinking, “Why are there 400 AWS services? What do all of these mean?” If tooling is often overwhelming for me, it’s definitely hard for non-technical people.

Why do I find this myth so frustrating? Well, not only is describing DevOps through tooling incorrect; it’s also the fastest way to put a non-technical stakeholder to sleep. And if we care at all about bringing DevOps ideas into our work, we desperately need to be able to communicate with these non-technical people on their terms and in their language. Defining DevOps by cryptic-sounding tooling only creates barriers to that communication.

Tools are what we use to implement DevOps. We have infrastructure-as-code tools that help us spin up new virtual machines in the cloud, and we have testing tools to check the speed of our apps. The list goes on. Ever heard the phrase “all the gear and no idea”? Defining DevOps by tooling is to do precisely this. Owning lots of hammers doesn’t make you a DIY expert—fixing lots of things makes you a DIY expert! DevOps companies use tooling, but…

DevOps is not tooling.

Myth 3: DevOps Doesn’t Work in Regulated Industries

DevOps comes with a lot of scary, often implausible-sounding practices. When I tell people that I much prefer trunk-based development to branching models, they usually recoil in disgust. “You do what?” they exclaim, acting as if I just popped them square in the jaw. “Everyone pushes changes to master every day? Are you crazy?” they say.

No, I’m definitely not. The proof is in the pudding. When you have a solid testing and deployment pipeline that catches defects well, having every developer commit to the same branch every single day makes a lot of sense. Don’t believe me? Google does it with thousands of engineers.

Many believe that these more radical approaches don’t work in regulated or large-scale environments, like finance. But the evidence is abundantly clear. Applications that are built with agility in mind (meaning it’s easy and fast to make changes) are less risky than their infrequently delivered counterparts.

Yes, it might feel safer to have security checkpoints and to have someone rifle through 100,000 lines of code written over six months. But security checkpoints are little more than theater. They make us feel safe without really making things that much safer. What does reduce security risk is automating your testing process, making small changes, putting them in production frequently, and applying liberal monitoring and observability.
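
To illustrate what that automation can look like in practice, here’s a minimal sketch of a post-deploy smoke check: ship a small change, probe the service for a few minutes, and roll back automatically if it misbehaves. The health endpoint, latency threshold, and rollback script are hypothetical placeholders, not part of any specific product.

```python
# smoke_check.py -- hypothetical post-deploy check: a small change goes out, an
# automated probe decides within minutes whether to keep it or roll it back.
import subprocess
import time
import urllib.request

HEALTH_URL = "https://example.internal/healthz"  # placeholder endpoint
MAX_LATENCY_SECONDS = 0.5                        # placeholder threshold
ATTEMPTS = 5

def release_is_healthy() -> bool:
    for _ in range(ATTEMPTS):
        start = time.monotonic()
        try:
            with urllib.request.urlopen(HEALTH_URL, timeout=5) as response:
                ok = response.status == 200
        except OSError:
            ok = False
        latency = time.monotonic() - start
        if not ok or latency > MAX_LATENCY_SECONDS:
            return False
        time.sleep(2)
    return True

if __name__ == "__main__":
    if release_is_healthy():
        print("Release looks good -- keeping it.")
    else:
        print("Release failed the smoke check -- rolling back.")
        subprocess.run(["./rollback.sh"])  # placeholder rollback step
```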

DevOps works in every environment.

Myth 4: DevOps Replaces Ops

Implementing DevOps doesn’t mean you need to go and fire your system admins and operations staff. On the contrary, you need their knowledge. Knowing absolutely everything about development and operations is almost impossible, so you’ll need people who have different specialties and interests.

Rather than fire our operations teams, we need to make sure their goals are aligned with the development teams’ goals: everyone driving toward faster delivery of high-quality software. A good waiter has tasted the food on the menu, but not every waiter needs to be a chef.

DevOps doesn’t mean removing Ops.

Wrapping Things Up

So, there you have it. The top four myths about DevOps—busted. Hopefully, this clears things up a little and you now know what DevOps is and isn’t. It’s principally a set of beliefs and practices, with tooling, roles, and teams being secondary.

Every company can and should incorporate ideas of DevOps into their business. It will lead to happier engineers and happier customers.

This post was written by Lou Bichard. Lou is a JavaScript full stack engineer with a passion for culture, approach, and delivery. He believes the best products emerge from high performing teams and practices. Lou is a fan and advocate of old-school lean and systems thinking, XP, continuous delivery, and DevOps.

The Cat and the Map

Why Map Your IT Environments?

“Would you tell me, please, which way I ought to go from here?”
“That depends a good deal on where you want to get to,” said the Cat.
“I don’t much care where,” said Alice.
“Then it doesn’t matter which way you go,” said the Cat.
“so long as I get somewhere,” Alice added as an explanation.
“Oh, you’re sure to do that,” said the Cat, “if you only walk long enough.”
  — Lewis Carroll, Alice’s Adventures in Wonderland

Preamble

Running a high-functioning IT team or tech company requires you to be clear in your mind about where you want to take your team. If you’re not clear about that, then just like Alice in the quote above, it doesn’t matter which way you go—or, in the context of the increasingly complex tech ecosystem, it doesn’t matter which methodology or tools you adopt. You end up implementing this technology or that methodology halfheartedly, which leads to switching to yet another technology or methodology, and the cycle repeats. The result is a form of techno-methodology whiplash for your team. Is that what you want for your team? I hope not.

Know Your Destination, Know Your Landscape

What the Cheshire Cat didn’t point out is that for most of us dealing with complex situations, knowing the destination isn’t enough. We need to know the landscape to plot our way to success. In this article, I will cover the top four reasons why you need to properly map your IT and test environments to help your team perform at a high level.

One View to See It All

When you map your IT and test environment, you essentially establish the landscape of the situation. A good map lets you bring together the various priorities and interests of your team and organization in a single view. The benefits of doing so can’t be overstated. Miller’s law states that the average human mind can hold only about seven things at any one time. Without a map of the entire landscape, how could you possibly navigate your team around the risks of deployment, development, and the day-to-day running of the IT and test environments?

In addition, you can build a map that contains multiple levels. Imagine that at the organization level you map out the various key structures, such as business, ops, the IT environment, and the test environment. Then you can drill in further by adding the substructures, such as system instances, applications, data, and infrastructure. All these structures and substructures interact with one another, which is why you also need to add the relationships among these structures, the projects, and the teams in your organization.
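
To make the idea less abstract, here’s a minimal sketch, assuming you capture the map as plain data with a couple of levels plus a separate list of cross-cutting relationships. The structure names, instances, and teams are invented purely for illustration.

```python
# A hypothetical multi-level environment map captured as plain data.
# All names here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str            # e.g. "business", "test environment", "application"
    children: list = field(default_factory=list)

# Top level: the key structures of the organization.
org = Node("Acme Corp", "organization", children=[
    Node("Payments", "business", children=[
        Node("payments-api", "application"),
        Node("payments-db", "infrastructure"),
    ]),
    Node("Test environment", "test environment", children=[
        Node("payments-api-uat", "system instance"),
    ]),
])

# Relationships cut across the hierarchy:
# (source, relation, target, owning team or project).
relationships = [
    ("payments-api-uat", "is an instance of", "payments-api", "QA team"),
    ("payments-api", "reads from", "payments-db", "Payments team"),
]
```

Even this toy version gives you a single place to answer “what touches what, and who owns it?”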

Now imagine you have this map right now. Wouldn’t that make it a lot easier to think about your decisions and weigh your options? You can almost literally trace how a possible solution would impact which system and which team—so before you even encounter objections, you can anticipate them. That’s the power of a single view of your landscape captured in a map.

Spotting Existing Gaps and New Opportunities

When you have a map, it almost immediately shows you some low-hanging fruit. Existing gaps and opportunities to improve your operations reveal themselves easily, and picking that low-hanging fruit can give you and your organization some quick wins.

Some typical quick wins would be:

  1. Identify waste and save costs. For example, you may identify system instances being maintained but not used.
  2. Identify underutilized resources and consolidate them. This happens quite frequently as well. For example, you may have a bunch of system instances that sit at constantly low utilization. Consolidating them brings a better return on what you spend on those resources (see the sketch after this list).
  3. Identify undersized systems or applications and reallocate buffer resources. Once you reduce waste and free up underutilized resources, you can redeploy some of them to the undersized systems. Typically, these are the systems people complain are constantly stretched, with no spare capacity available because of budget. In other words, you can reallocate your resources better simply by having this map.
  4. Identify the high-growth areas and enable them to grow faster. With a map, you can see how certain systems or applications are growing quickly because they are driven by fast-growing demand. When you can link these high-growth areas to how they help the organization, you will be able to convince management that adding more budget makes business sense. Or you can redeploy resources from other structures facing slowing growth. Either way, a map bolsters the strength of your decision.
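
Here’s a minimal sketch of what quick wins 1 and 2 might look like once the map carries some utilization data. The instance names, figures, and thresholds below are made up for illustration.

```python
# Hypothetical quick-win check over the map: flag system instances that look
# like waste or candidates for consolidation. All data here is invented.
instances = [
    {"name": "crm-uat-01",  "avg_cpu": 0.04, "last_used_days_ago": 120},
    {"name": "billing-sit", "avg_cpu": 0.62, "last_used_days_ago": 1},
    {"name": "legacy-demo", "avg_cpu": 0.02, "last_used_days_ago": 300},
]

UNDERUSED_CPU = 0.10      # placeholder threshold
STALE_AFTER_DAYS = 90     # placeholder threshold

for instance in instances:
    if instance["last_used_days_ago"] > STALE_AFTER_DAYS:
        print(f"{instance['name']}: likely waste -- unused for {instance['last_used_days_ago']} days")
    elif instance["avg_cpu"] < UNDERUSED_CPU:
        print(f"{instance['name']}: underutilized -- candidate for consolidation")
```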

Streamline and Simplify Processes

Everyone has a story about dealing with silly, ridiculous bureaucratic processes. However, as civilization progresses, more processes are needed for things to run smoothly, and running your IT and test environments successfully means having good processes in place. Think Value Stream Mapping. The key is to know when these processes become less effective or even outright unnecessary, and then to retire or remodel them. In other words, you want to discover increasingly ineffective processes and nip them in the bud.

So, study the stats from your troubleshooting and logs and add those to your map. Talk to your various teams from business and customer support, and add their anecdotes in as well. In a single view, you can let both data and personal stories drive your decisions on how to simplify running your IT and test environments. Streamlining and pruning away processes that used to be (but are no longer) necessary releases resources back to your budget. This kick-starts a virtuous cycle, as freed-up resources can then be redeployed toward growing opportunities.

Better Impact Analysis and Scenario Planning

Once you take advantage of the single view to quickly exploit new opportunities, uncover waste, improve resource utilization through reallocation, and streamline processes, you have established credibility for mapping. Imagine earning all that success without even using the methodological or technological fad of the day.

Now it’s time for the exciting stuff—planning the future. Once again, the map will help greatly. You can plan several scenarios and strategies in a playbook and then check them against the map. The check involves some form of impact analysis. Scenario planning is widely used by some of the top-performing organizations in the world, and having a map of your IT and test environments improves the effectiveness and efficiency of the exercise. No more guessing about the potential impact of brainstormed strategies for future scenarios; you can immediately check and verify the obvious drawbacks and benefits. Scenario planning gets better because impact analysis gets better with a map of your environments.
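
As a minimal sketch of the impact-analysis step, assume the map stores its relationships as simple dependency edges; checking a scenario such as “what if we change this database?” then becomes a short graph walk. The systems, edges, and owning teams below are invented.

```python
# Hypothetical impact analysis: given "what depends on X?", walk the map's
# relationships to list every system and team a change could touch.
from collections import deque

# (dependent, depends_on) edges plus an owning team per system -- invented data.
DEPENDS_ON = [
    ("payments-api", "payments-db"),
    ("checkout-web", "payments-api"),
    ("reporting-job", "payments-db"),
]
OWNERS = {
    "payments-api": "Payments team",
    "checkout-web": "Storefront team",
    "reporting-job": "Finance IT",
}

def impacted_by(changed_system: str) -> set:
    """Return every system that directly or transitively depends on the change."""
    impacted, queue = set(), deque([changed_system])
    while queue:
        current = queue.popleft()
        for dependent, dependency in DEPENDS_ON:
            if dependency == current and dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted

if __name__ == "__main__":
    for system in sorted(impacted_by("payments-db")):
        print(f"{system} ({OWNERS.get(system, 'owner unknown')}) would be affected")
```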

Conclusion

In enterprise IT intelligence, “environment mapping” is a highly beneficial and foundational exercise that all IT teams and tech companies should perform at least once every quarter or so. It provides visibility into the many interrelated structures and their relations in your organization, which are not easy to discern without the map. That increase in visibility delivers great benefits. Agility, smooth delivery, greater collaboration, and good operational and business decision-making all flow from greater visibility of the landscape surrounding your team and organization. Buy-in becomes simpler when everybody can be on the same page—and when everybody is looking at the same map as well.

Mapping your environments is key to your organization’s success. Bear in mind that maps are imperfect, but they are still very useful. Mapping helps you and your team become better at your jobs simply because you did the exercise: it surfaces the differences in thinking among the members of your team. Therefore, don’t wait until you come up with the perfect map. Your team automatically gets better with more practice at mapping, and your team and your organization will thank you when they start to see the uptick in results.

Author: TJ Simmons

This post was written by TJ Simmons, the nom de plume of Kim Sia. He started his own developer firm five years ago, building solutions for professionals in telecoms and the finance industry who were overwhelmed by too many Excel spreadsheets. He’s now proficient with the automation of document generation and data extraction from varied sources.

Just Enough ITSM

Just Enough ITSM (or ITSM for Non-Production)

Preamble

We’ve all experienced the frustration that comes from too much or too little service management in our test environments. Lately, the DevOps engineer in me has been thinking about how we end up in one of those states. How can we get just enough service management in non-production environments?

Production environments require more service management than non-prod environments. But we shouldn’t throw the baby out with the bathwater when it comes to service management in non-prod. I’m a software developer who practices DevOps, so I do a lot of work involving operations, deployment, and automation. I interface with many groups to achieve a good workflow within the organization.

Operations and development often have contradictory goals. Fortunately, we can all find common ground by working together. Understanding each other’s needs and goals through communication is the key to success!

But before we get into that, let’s explore the world of IT service management (ITSM) for a bit. In this post, I’ll discuss different levels of service management in non-prod environments and borrow some fundamental DevOps principles that can help you get the right amount of ITSM. Let’s start with an overview of non-production environments.

What Are Non-Production Environments?

We use non-production environments for development, testing, and demonstrations. It’s best to keep them as independent as possible to avoid any crosstalk. We wouldn’t want issues in one environment to affect any of the others.

These environments’ users are often internal—for the most part, we’re talking about developers, testers, and stakeholders. It’s safe to assume that anyone in the company is a potential user. It’s also safe to assume that anyone providing a service to the company might have access to non-production environments. But there could also be external users accessing these environments, perhaps for testing purposes.

Unless you have the environment in question tightly controlled, you may not know who those users are. That’s a big problem. It’s important to understand who’s using which environments in case someone inadvertently has access to unauthorized information. Or maybe you just need to know who needs to stay informed about changes or outages in a specific environment.

That’s where service management comes in. The next section explains how bad things can be when there is no service management in non-production. This exercise should be fun…or it might make you queasy. Better have a seat and buckle up just in case!

When You Have Zero Service Management in Non-Prod

Let’s call this the state of anarchy. Here’s what it looks like:

  • Servers are going haywire and no one knows it.
  • Patches are missing.
  • Security holes abound!
  • The network is barely serviceable.

Can anyone even use this environment? How did it get like this, anyway? I have a couple of theories…

  1. Evolutionary Chaos: This model was chaos from the start. Someone set up an environment for testing an app a long time ago. It did its job and was later repurposed. Then, it got repurposed again. And again. Eventually, it started to grow hair. Then an arm sprouted out of its back. Then it grew an extra leg. Suddenly, it began to “self-organize.” Now it seems to have a mind of its own. It grew out of chaos!
  2. Entropic Chaos: Entropy is always at play. It takes work to keep it from causing decay. In this theory, things were great in the past. But over time, service management became less and less of a priority for this environment. Entropy won the day, and the situation degraded into chaos.

However the environment got into its current chaotic state, the outcomes are the same. Issues are resolved slowly (if at all). Time is wasted digging up information or piecing it together. Data becomes lost, corrupted, and insecure. Owning chaos is a burden and a huge risk in many respects. We don’t want to end up here!

If you’ve made it this far and still have your lunch in tow, you’re past the worst of it. You can uncover your eyes, but be wary! Next, we’re going to look at a wholly buckled down environment and how it can go wrong in other ways.

When You Have Too Much ITSM in Non-Prod

It’s better to have too much service management than not enough. But it’s still not ideal. For one thing, it’s wasteful. For another, it causes morale to suffer. Granted, it’s reasonable to default to production-level service management at first. But staying on that default is a symptom of a big problem—a communications breakdown. And the root cause of having too much ITSM lies partly in human nature and partly in organizational legacy.

Here are my two theories on how organizations end up here:

  1. Single-Moded Process: Service delivery, operations, and all the other departments focused on service management are hell-bent on making sure the customer is absolutely satisfied with their service. Going the extra mile to make the customer happy is a good thing! Operations folks are trained on production-level service management, so their priority is to keep the trains running. With this in mind, operations management systems are set up for production environments, and it’s easiest to use that same default everywhere. For better or worse, every environment is treated like a production environment! (An environment-aware alternative is sketched after this list.)
  2. Fractured Organization: Organizations are subdivided into functional groups. When these groups aren’t aligned to a shared purpose, they’ll align to their own purposes and even end up competing with each other. They’ll center on their own aims, tossing aside the needs of others.
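
As one way out of the production-everywhere default, here’s a minimal sketch of an environment-aware alerting rule: production issues page the on-call engineer immediately, while non-production issues become a ticket for business hours. The environment names, channels, and messages are hypothetical.

```python
# Hypothetical alert-routing default that distinguishes environments, so a
# noisy development box files a ticket instead of paging someone at 3 a.m.
ROUTING = {
    "production":  {"page_on_call": True,  "channel": "pager"},
    "staging":     {"page_on_call": False, "channel": "ticket-queue"},
    "development": {"page_on_call": False, "channel": "ticket-queue"},
}

def route_alert(environment: str, summary: str) -> str:
    # Unknown environments fall back to the strictest rule (fail safe).
    rule = ROUTING.get(environment, ROUTING["production"])
    if rule["page_on_call"]:
        return f"PAGE on-call now: [{environment}] {summary}"
    return f"File in {rule['channel']} for business hours: [{environment}] {summary}"

if __name__ == "__main__":
    print(route_alert("development", "disk usage at 85% on build agent"))
    print(route_alert("production", "checkout error rate above 2%"))
```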

How You Know When There’s a Problem

The fractured organization theory may explain what happened to a friend of mine recently. Let’s call him Fabian.

Fabian was the on-call engineer this past June. The overnight support team woke him up several nights in a row for irrelevant issues in the development environment. He brought this up with operations, who were responsible for managing the alert system. Unfortunately, the ops engineer was not sympathetic to his concerns in the slightest. Instead, the ops guy put it on Fabian to tell him what the alert system should do. That’s understandable, but Fabian had no information to go on, and the ops guy wouldn’t share anything with Fabian or collaborate with him on putting a plan together.

This story illustrates a misalignment between operations and development. Problems like this crop up all over the place. Usually, we can remedy or even avoid these situations by taking just a bit more time to understand the other side.

The four theories I’ve presented tell us about extremes. And yes, these extremes push the boundaries and aren’t likely to occur. Still, an organization sitting somewhere in the middle may not have the right service management in non-production. As we’ve seen with Fabian’s story, this is often an issue of misaligned goals.

So how do we get to just enough service management? Maybe the answers lie in what’s working so well for DevOps! Let’s see how.

Just Enough Service Management

IT teams have members with specialties suited to their functional area. Operations folks keep the wheels turning. QA makes sure the applications behave as promised. There are several other specialties—networking, security, and development are just a few examples. Ideally, all of these teams interact and work together toward a well-functioning IT department. But it doesn’t just happen. It takes some key ingredients.

Leadership

Working together effectively takes good leadership. Leadership happens at all levels in an organization. Remember, a leader is a person, not a role.

Shared Vision

It’s also critical to have a shared vision and shared goals. Creating a shared vision is part of being a leader. Here are a few points to remember about vision:

  • A shared vision creates alignment.
  • The vision should be exciting to everyone.
  • You have to do some selling to get everyone aligned with the vision.

Your vision for the test environment could be something like: “Our test environment will be a well-oiled machine.” Use metaphors like “Smooth Operators” or “Pit Crew” to convey the right modes of thinking.

Open Communications

Keep communications open and honest. Open, honest communications can be one of the most significant challenges you’ll face in implementing the right amount of service management. Many of us have a hard time being honest for fear of looking weak in the eyes of others. That fear is difficult to overcome, especially in an environment where we don’t feel safe and secure. Managers have the vital task of creating an environment where employees feel safe and able to communicate openly. Trust is essential to success.

One Last Look

Getting the wrong amount of service management in any environment is a problem. Too little opens up all kinds of risks. Too much ITSM results in wasted time and resources. In this post, I presented four theories for how an organization might end up with the wrong amount of service management in non-prod and discussed what changes you can make to correct that.

ITSM doesn’t happen in a bubble. It takes alignment between many stakeholders. There are three main things we can do to get alignment: wear your leader hat, share the vision, and converse honestly. You can accomplish any goal when you’re set up to win—even with something as challenging as achieving just enough service management.

Author: Phil Vuollet

This post was written by Phil Vuollet. Phil uses software to automate processes to improve efficiency and repeatability.