TEM-10-Essential-Best-Practices

Test Environment Management 10 Essential Practices

Introduction

A test environment is a setup where the testing team executes test cases. It comprises software, hardware, and network configuration. The setup of a test environment depends on the application under test. A complete setup helps testers carry out their tasks without any system-side hurdles. Ultimately, the setup helps improve the quality of the final product.

 


In this post, we’ll look at why managing your test environment is important. After that, we’ll discuss 10 best practices for test environment management. By following these best practices, your company’s testing team can manage test data efficiently so that it can be reused. The practices will also help your team comply with data privacy regulations and ensure client satisfaction. So, let’s get started.

Importance of Test Environment Management

As technology evolves, requirements keep changing. For instance, with Angular dominating the UI domain, demand for single-page applications has increased a lot. Cost, time, and quality are the most important factors for every business to watch. Every firm aims for an appropriate budget and ample time before starting a project. But somehow, these are the two resources most often in short supply. Well, we don’t live in an ideal world, do we? Sometimes, due to time and budget constraints, the quality of the end product declines.

But budget and time shortages don’t mean that you should compromise on the testing phase. Software testing is a tricky process with the involvement of several dependencies.

Testing is a crucial activity of the software development life cycle (SDLC) and can determine a product’s fate. Therefore, the test environment has to be reliable. Do you want to disappoint customers with a product that has many critical bugs because of improper testing? No matter whether you’re a start-up or an established company, never overlook the importance of testing. To get the most accurate test results, your team needs proper test environment management.

If a team doesn’t give importance to test environment management, it results in poor handling of assets. This includes time and budget. When a company can’t handle these in the right way, quality suffers. Thus, to maintain a high quality of products and services offered, it’s essential to manage the test environment. Before getting on to the best practices, take a look at these metrics, which will help you to measure and improve your test environment.

10 Best Practices for Test Environment Management

Now that we know why managing a test environment is important, let’s get started with the 10 best practices for test environment management.

1. Begin Testing Exercise at an Early Stage in the SDLC

Even though most firms know the importance of testing early, very few successfully implement it. When teams don’t test early, bugs surface at a later stage. Fixing them requires more time, effort, and money. As a result, it disrupts the management of the test environment. Testing should start as soon as the development team has written even a few lines of code. The team should also follow the shift-left approach, which involves performing testing earlier in the product’s life cycle. The process results in fewer bugs to fix in the end. Hence, it saves time and cuts down costs.

2. Demand Awareness and Management of Knowledge

When customers make a demand, a company must develop a product in a way that satisfies that demand. When team members keep client needs in mind during development, the outcomes are close to what the client expects. Thus, it’s important to use a test environment management strategy according to customer needs. Testers writing a test case should develop a knowledge base according to demands. The business analyst also needs to keep updated documents that contain the current as well as changed requirements. That way, if the test environment has to be updated, other team members stay in line with what’s going on.

3. Conduct Iterative Tests

Most companies are adopting agile as part of their framework. Agile follows a sprint-based approach. It also involves testing in iterations. That means the entire product is divided into small phases. Each phase has its development and testing cycle. The entire process reveals bugs early, which makes fixing them easier. Iterative tests increase the flexibility of the SDLC. The client can change the scope in case the need arises without it being a burden to the budget. Since the team handles bugs at every sprint, there doesn’t end up being an overload of them at the end of the project. Thus, managing risks becomes easier.

4. Plan and Coordinate

Planning is very important while managing the test environment. Testing and development teams often don’t have separate test assets. So, test environment managers should plan schedules for both teams. They should ensure proper coordination to avoid conflicts. Sometimes, shared usage of resources can give rise to certain conflicts. For instance, if your team has only a few iOS machines for developing and testing iOS apps, conflict may arise over which team will use them and when. Planning and coordination are a must to maintain transparency among teams and team members. Apart from that, proper communication with clients is important to keep them updated on their requirements. Check out this use case, which will help you to effectively plan and use your resources.

5. Reuse the Test Resources and Test Cases

Reusing test resources helps save money for a company. It frees the firm from the need to tap new resources every time a new project begins. Even though every application is unique, many share some generic areas. That’s where opportunities to reuse test cases arise. Reusing test cases reduces redundancy. It eliminates the need to write a different script each time you’re testing new features. For instance, all e-commerce stores have a shopping cart. Testers can therefore reuse the script for the “add to cart” feature across apps; it won’t matter that they’ve used it before, since the feature is the same.
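To make that reuse concrete, here’s a minimal sketch in Python. The `Cart` class and SKU names are invented; the point is that a generic check written against a small cart interface can be reused by any store whose cart exposes the same operations.

```python
# A reusable "add to cart" check, written once and shared across projects.
# The Cart class is a stand-in for whatever cart API each store exposes.

class Cart:
    def __init__(self):
        self.items = {}

    def add(self, sku, qty=1):
        if qty < 1:
            raise ValueError("quantity must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_items(self):
        return sum(self.items.values())


def check_add_to_cart(cart_factory):
    """Generic test routine: any app whose cart offers add() and
    total_items() can reuse it unchanged."""
    cart = cart_factory()
    cart.add("SKU-1")
    cart.add("SKU-1", qty=2)
    cart.add("SKU-2")
    assert cart.total_items() == 4
    return True
```

Another project would simply pass its own cart factory to `check_add_to_cart` instead of writing the test again.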

6. Implement Standardization and Automation

It’s important for testers to analyze the validity of tests. But this requires a benchmark. Defining test environment standards makes it possible to set up a benchmark for running the test cases. After setting these standards, it’s time to automate. Tasks that lend themselves to automation include deployment, build, and shakedown. Automation can save time, resources, and manual effort that can be put to better use later. Configuration management becomes a lot easier when the dependency on manual testers lessens. Automated TEM tools reduce the number of test environments in a test bed. As a result, they improve test environment provisioning time. Besides this, the costs incurred are lower.
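As a rough illustration of an automated shakedown, the sketch below runs a list of quick checks against a freshly provisioned environment and reports what failed. The check names are hypothetical; in practice each lambda would be a real probe (a database connection, an HTTP health endpoint, and so on).

```python
# Minimal environment "shakedown" sketch: run each named check against a
# freshly provisioned test environment and collect the failures.

def shakedown(checks):
    """Run each named check; return (all_passed, list_of_failures)."""
    failures = [name for name, check in checks.items() if not check()]
    return (len(failures) == 0, failures)

# Illustrative checks only; replace the lambdas with real probes.
env_checks = {
    "database reachable": lambda: True,   # stand-in for a connection test
    "app responds": lambda: True,         # stand-in for an HTTP health probe
    "test data loaded": lambda: True,
}

ok, failed = shakedown(env_checks)
```

Wiring a script like this into the provisioning pipeline means a broken environment is flagged before testers waste a day on it.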

7. Use Testing Techniques According to Needs

I’m going to cite a situation that you must have come across many times. There are times when something seems impossible at first. But if you break it down into chunks, it doesn’t seem overwhelming. Taking it one step at a time makes things simple. In most cases, with this approach, you succeed. Similarly, for test environment management, first, analyze the test structure. Then break down massive loads of tasks into manageable pieces. After that, understand the steps and the needs for performing each. Identify the testing effort each piece requires and take the necessary steps. According to the need, pick out the testing techniques and implement them. For example, you can use containers to improve your system’s security and agility.

8. Mask and Encrypt Test Data

As technology advances, cyberthreats have increased. Endpoint devices are the starting point of the majority of data breaches. They’re a threat not only to users but to companies as well. So, companies should mask and encrypt user data. More than that, every company should avoid using real customer data during testing. Firms should ensure compliance with data privacy regulations such as the GDPR and with rules for handling personally identifiable information (PII). Some processes that help ensure data compliance are ETL automation, service virtualization, and data fabrication.
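Masking can be as simple as replacing direct identifiers with stable pseudonyms before data reaches the test bed. The sketch below is one possible approach, not a compliance recipe; the field names and salt are illustrative, and a real system would manage the salt as a secret.

```python
import hashlib

# Masking sketch: replace direct identifiers with short, stable pseudonyms
# so real PII never enters the test environment.

def mask_record(record, salt="test-env-salt"):
    masked = dict(record)
    for field in ("name", "email"):  # illustrative identifier fields
        if field in masked:
            digest = hashlib.sha256((salt + masked[field]).encode()).hexdigest()
            masked[field] = digest[:12]  # short, deterministic pseudonym
    return masked

user = {"name": "Jane Doe", "email": "jane@example.com", "country": "DE"}
safe = mask_record(user)
```

Because the pseudonyms are deterministic, relationships between records survive masking, so tests that join on a masked field still work.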

9. Implement Processes According to Stakeholder Requirements and the Company’s Culture

Stakeholders are the most important factor determining the success of a business. They’re the ones giving the requirements. The entire team has to work according to their needs. But it’s important that their needs are in line with the company’s culture. Sometimes companies don’t have the means to ensure the fulfillment of customer requirements. This results in an unsatisfied client, which can be fatal for a company. The testing team should have pre-configured assets before they start testing. Clients don’t forgive unresolved bugs in the later stages. For instance, if an e-commerce app in production charges a customer twice for a transaction, it can create chaos. As a result, the reputation of the company can suffer. You can take a look at this blog to analyze and refine your company’s current capabilities.

10. Convey the Right Status of the Task

Clear and accurate communication is a must to ensure a smooth flow of work. If information is conveyed incorrectly, it can cost a firm its reputation. The objective of a project should be clear to all at the beginning. Team members should share task status with the right group of people. The timing of conveying information is also important for a successful outcome.

Suppose you need a specific set of data for executing a test case. Whenever you’re stuck on that test case, share the blocker with the people concerned. Don’t just inform your QA lead. Inform the scrum master or your QA manager as well. They’ll take care of the issue so that you can smoothly carry out your task. If you hesitate over whom to ask, testing will be delayed. Before the project starts, the entire team should have clarity about whom to contact in case of emergencies or for sharing daily task statuses.

What Drives Appropriate Test Environment Management?

The processes for end-to-end testing should be transparent for managing your test environment. The key factors driving smooth management include the following:

  1. Resource management: Use a resource properly and assign the right task to the right person.
  2. Efficient planning: Plan a successful test cycle at each sprint that results in a bug-free end product.
  3. Process optimization: Adjust the entire test process in a way that the resources give their best output.
  4. Test automation: Automate every repetitive task that seems to waste manual labor.

Software testing is tricky. To achieve high accuracy, setting up a test environment close to a real-life scenario is important. To set up such an environment, proper planning and management are musts. Scenarios change and test environments evolve. Thus, a test environment management strategy is vital for firms. A combination of the above practices increases productivity. At the same time, test environment management practices also reduce costs and accelerate releases.

Author: Arnab Roy Chowdhury

This post was written by Arnab Roy Chowdhury. Arnab is a UI developer by profession and a blogging enthusiast. He has strong expertise in the latest UI/UX trends, project methodologies, testing, and scripting.

seven-metrics

7 Metrics for Configuration Management

Years ago, a company might have released a software suite and then proverbially kicked back in a chair with its feet on a desk basking in celebration. Suffice it to say that the software world moves much faster today. It seems as though there are some companies that push out new updates every few days. And thanks to microservices architecture and the DevOps mindset, there are many companies that are constantly updating their software or at least some feature in it. Pumping out release after release isn’t easy. With so many moving parts and so much riding on each new update, companies need to do everything within their power to ensure that releases are well-received by users. That starts with getting their development house in order through a process known as configuration management.

 

What Is Configuration Management?

Configuration management is the process by which organizations and development teams oversee new software updates to ensure they work as designed as bugs are fixed, new features are introduced, and old features are decommissioned.

Thanks to configuration management, organizations can gain full visibility into the development lifecycle and easily identify errors that may need to be fixed.

If you’re thinking about implementing configuration management at your organization, that’s great news. But like anything else, you can’t just expect configuration management to solve all of your problems on its own. You need the right approach.

With that in mind, let’s take a look at seven different configuration management metrics you can track to increase the chances that your initiatives help you achieve results. Keep track of these metrics and work hard to improve them over time, and you’ll build better applications that are better received by your users.

1. Frequency of Updates

Some companies are perfectly fine with shipping updates once a quarter or even once a year. Other companies pride themselves on pumping out new updates every month, and some might aim to release even more new software packages than that.

Every software company has unique goals. It might not matter how regularly your software is updated, but it might matter how consistently it is. Your users will expect at least some rhyme and reason to the number of updates you pump out.

Keeping track of the frequency of updates metric will help you make sure you are meeting your company’s goals and satisfying customer expectations. If you’re not shipping releases as frequently as you’d like, you might want to drill deeper and find out why.
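One simple way to track this metric is to compute the average gap between release dates from your release log. This is a minimal sketch; the dates are made up for illustration.

```python
from datetime import date

# Release dates (illustrative). In practice, pull these from your
# release log or deployment tooling.
releases = [date(2021, 1, 4), date(2021, 2, 1), date(2021, 3, 1), date(2021, 3, 29)]

def average_release_interval(dates):
    """Average number of days between consecutive releases."""
    gaps = [(later - earlier).days for earlier, later in zip(dates, dates[1:])]
    return sum(gaps) / len(gaps)
```

If the computed interval drifts away from your target cadence, that's the cue to drill deeper and find out why.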

2. Release Downtime Metrics

We all know how applications should work. When they don’t work as designed, we’re unable to get things done quickly. Depending on how bad the problem gets, users can get frustrated to the point of thinking about finding a substitute solution.

End users depend on your software. For a business user, that might mean a platform they use to store information and communicate with colleagues. For a developer, it might mean a place they store code. And for a regular customer, it might be a social network they use every day to meet new people.

Whatever the case may be, the moment you are unable to meet user expectations might be the moment your users begin an exodus.

Worse than that, downtime can be prohibitively expensive. In fact, a recent Gartner report found that downtime can cost as much as $540,000 per hour.

Keeping track of how much downtime you incur (if any) while a new update is released can help you maintain positive and productive user experiences. In the event there is downtime during a new release, you can quickly identify what happened and take steps to reduce the chances it happens again.

Add it all up, and keeping tabs on this metric can help you provide better experiences while increasing profitability.

3. Average Number of Errors

In a perfect world, your developers would write flawless code every day, and each new release would ship with perfect code. But we live in the real world where people do make mistakes.

Of course, it’s in your best interest to work as hard as you can to keep those mistakes down to a minimum. By keeping track of the average number of errors in each new software release, you can identify areas in your workflow that could be improved. This may help you catch mistakes earlier in the process.

For example, you might realize that adding a new tool to your DevOps team’s arsenal can help you release smoother updates every time.

At the very least, tracking this metric provides an easy mechanism to determine whether your team is trending in the right direction, i.e., making fewer errors as time goes on.
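A minimal way to check that trend is to compare the average error count of the most recent releases against the earlier ones. The counts below are fabricated for illustration.

```python
# Errors observed in each release, oldest first (illustrative numbers).
errors_per_release = [14, 11, 12, 9, 7, 6]

def improving(counts, window=3):
    """True if the average of the last `window` releases beats the
    average of everything before them."""
    recent = sum(counts[-window:]) / window
    earlier = sum(counts[:-window]) / len(counts[:-window])
    return recent < earlier
```

A dashboard flag based on a check like this makes "are we trending in the right direction?" a yes/no question instead of a gut feeling.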

4. Code Lines Per Update

The point of writing is to convey a point to your readers. Unless the author is getting paid per word, writers should state their case in as few words as possible. The question is “What day is it today?” not “Do you have any idea which 24-hour period we are currently in the middle of?”

In the world of software development, the same maxim holds true. You don’t need 100 lines of code when a single line will do the same trick.

Keeping track of code lines per update can help you ensure that you are writing software efficiently. Depending on what your team’s workflows are like, you may be able to identify individual developers who are writing too many lines of code and have the more efficient coders give them a few pointers.
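As a sketch, lines changed per update can be aggregated per author from commit records. The records here are fabricated; in practice they would come from your version control system.

```python
# Fabricated commit records; a real pipeline would parse these out of
# version control history (e.g., the numstat output of a git log).
commits = [
    {"author": "dana", "lines": 40},
    {"author": "dana", "lines": 60},
    {"author": "sam", "lines": 400},
]

def lines_per_author(commits):
    """Total lines changed per author across an update."""
    totals = {}
    for commit in commits:
        totals[commit["author"]] = totals.get(commit["author"], 0) + commit["lines"]
    return totals
```

An outlier in these totals isn't proof of inefficient code, but it tells you where a code review conversation might be worth having.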

5. Rework Metrics

How many files does your team rework each month?

Developers don’t come cheap. The last thing you want to do is pay them to do the same work over and over again—whether that’s because someone did it incorrectly in the first place or because your team is struggling to communicate effectively.

Tracking rework metrics can help you make sure that the percent of rework your team does each month doesn’t increase in perpetuity. On the flipside, you may also be able to identify what you are doing that is decreasing rework. With that information on hand, you may be able to bake additional efficiencies into your development processes.

6. Frequently Changing Files

Track this metric to determine whether certain files are changing too frequently. If you find out that certain files are changing with each update, you may need to look into the issue a bit.

For example, you can determine why certain files are changing so often. Maybe it’s because developers aren’t sure of the requirements. Maybe it’s because there’s an issue with your testing and QA approach.

Whatever the case may be, this metric can help you add additional efficiencies into your development processes by reducing or eliminating duplicative work and rewriting inefficient code blocks as needed.
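A small sketch of this metric: count how often each file appears across updates and flag the ones that change in most of them. The change log below is fabricated; real data would come from version control.

```python
from collections import Counter

# Files touched in each update (fabricated; would come from commit history).
changes_by_update = [
    ["checkout.py", "cart.py"],
    ["checkout.py", "search.py"],
    ["checkout.py"],
    ["checkout.py", "cart.py"],
]

def hot_files(changes, threshold=0.75):
    """Files that changed in at least `threshold` of all updates."""
    counts = Counter(f for update in changes for f in update)
    n = len(changes)
    return sorted(f for f, c in counts.items() if c / n >= threshold)
```

Files flagged this way are the natural starting point for the "why does this keep changing?" investigation described above.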

7. Root Causes for Late Delivery

As you optimize your release management workflows, everything should get more and more predictable.

Yet nobody can predict the future and nobody’s perfect. So things will invariably not go according to plan every now and again.

Configuration management lets you drill down into the root causes for late delivery.

Fingers crossed that you never run into any errors that slow down your releases. But in the event you do miss some deadlines, you may be able to start detecting a pattern as to why you are unable to meet them.

Armed with that information, you can begin working backward to identify what is causing delays and what you need to do to prevent that from happening in the future.

Are You Ready to Start Using Configuration Management?

Is your development team reaching its full potential and doing its best work? If not, it may be time to get started with configuration management. That way, you’ll be able to delight customers by meeting their expectations while avoiding downtime and increasing profitability.

And the best part? With the right tools in place, configuration management can largely be automated.

To learn more about how your DevOps team can integrate configuration management into their workflows to build better software more efficiently, take a look at Enov8.

Author Justin Reynolds

This post was written by Justin Reynolds. Justin is a freelance writer who enjoys telling stories about how technology, science, and creativity can help workers be more productive. In his spare time, he likes seeing or playing live music, hiking, and traveling.

ITIL4-Whats-Changed

ITIL 4.0: What Has Changed?

It’s hard to imagine a world that existed without technology. Yet it wasn’t so long ago when things like computers and the internet were brand-new and seemingly futuristic concepts. As computing infrastructure became increasingly widespread in the 1980s, the government of the United Kingdom issued a set of recommended standards that IT teams should follow because it realized that, at the time, everyone was just doing their own thing.

Shortly thereafter, the first iteration of the Information Technology Infrastructure Library (ITIL) emerged, originally called the Government Information Technology Infrastructure Management (GITIM) methodology. These guidelines outlined a set of practices, processes, and policies organizations could follow to ensure their IT infrastructure was set up in such a way as to support their business needs. The ITIL standards were inspired by the process-based management teachings of productivity and management guru W. Edwards Deming.


Over the years, we’ve seen many iterations of the ITIL. The most recent version of the standards—ITIL 4—was released in February 2019. In large part, this iteration was influenced by the agile approach to software development and the rise of DevOps teams—both of which have largely transformed the way we think about technology. 

 

Keep reading this post to learn more about:

  • What ITIL is
  • The pros and cons of ITIL
  • How ITIL has changed over time
  • How, specifically, the rise of agile workflows and DevOps teams impacted ITIL 4

What Is ITIL?

Life would be difficult if it were impossible to learn from other people and we had to figure everything out by ourselves. Good thing that’s not the case.

At a very basic level, ITIL is a framework that outlines best practices for delivering IT services throughout the entire lifecycle. Organizations that follow this framework put themselves in a great position to stay on the cutting edge of technology and leverage the latest tools and philosophies that drive leading innovators forward today. They are also able to respond to incidents faster and enact change management initiatives with more success.

At a high level, there are five core components of ITIL 4:

  1. Service value chain.
  2. Practices.
  3. Guiding principles.
  4. Governance.
  5. Continual improvement.

Now that we’ve got our definitions locked down, let’s shift our attention to the pros and cons of enacting ITIL at your organization.

What Are the Pros of ITIL? 

ITIL is popular for good reason. The framework helps organizations big and small optimize their IT infrastructure. It also helps them secure their networks and realize productivity gains.

More specifically, ITIL enables organizations to:

  • Keep IT aligned with business needs, ensuring that the right infrastructure is in place for the task at hand. For example, a team that has a mobile workforce should leverage cloud platforms that enable employees to work productively from any connected device.
  • Delight customers and strengthen user experiences by improving the delivery of IT services and maintaining a network and infrastructure that works as designed and meets modern expectations.
  • Reduce IT costs and eliminate unnecessary expenditures by ensuring that IT infrastructure is optimized and efficient. For example, if you’re storing petabytes of duplicative data for no reason, best practices would tell you that you need to do a lot of culling to save on storage costs.
  • Gain more visibility into IT expenses and infrastructure to better understand your network and detect inefficiencies that can be improved. For example, if your software development team has recently started using containers to build applications, you might not need to run as many virtual machines anymore, which drain more computing resources.
  • Increase uptime and availability due to increased resiliency and robust disaster recovery and business continuity plans. This is a big deal because downtime can be prohibitively expensive, depending on the scale of your organization. Just ask Amazon.
  • Future-proof tech infrastructure to support agile workflows and adaptability in an era where customer needs shift overnight and competitors are always just a few taps of a smartphone away.

What Are the Cons of ITIL? 

But like everything else, ITIL by itself is not a panacea. You can’t just hire some consultant who will preach the virtues of ITIL and expect to transform your IT operations overnight. 

While the benefits of the framework speak for themselves, you need to be realistic about shifting to a new approach to IT management. However, with the right approach—which includes training, patience, and reasonable expectations—your organization stands to benefit significantly by adopting ITIL.

How Has ITIL Changed Over the Years?

ITIL initially emerged because more and more organizations were using new technologies but nobody really knew how to manage them effectively. Companies were largely using technology because they could—not because they were making strategic investments to support their customers and business needs. The initial iteration of ITIL found that most companies had the same requirements and needs for their IT networks, regardless of size or industry.

At the turn of the millennium, the second iteration of ITIL came online. In large part, this version consolidated and simplified the teachings and documentation from the inaugural ITIL framework.

In May 2007, ITIL 3 arrived. This third iteration included a set of five reference books: Service Strategy, Service Design, Service Transition, Service Operation, and Continual Service Improvement. ITIL 3 picked up where ITIL 2 left off, further consolidating the framework to make it easier for organizations to implement.

Four years later, ITIL 3 was revised once more, primarily to maintain consistency as technology evolved.

Introducing ITIL 4

Fast forward to 2019, and the most recent version, ITIL 4, is where we’re at today. Quite simply, ITIL 4 was issued to align the standards with the agile and DevOps workflows that have grown to dominate technology teams over the last several years. ITIL 4 includes two core components: the four dimensions model and the service value system. 

At a high level, ITIL 4 represents more of a change in approach and philosophy than a change in content. Just as software teams adopt agile and DevOps workflows, IT must adopt a similar mindset if they wish to keep pace and support accelerated innovation. At the end of the day, IT is a cornerstone of the success of the modern organization. It’s imperative that IT support the new way of working if an organization wishes to reach its full potential.

How Have Agile and DevOps Impacted ITIL 4?

In the past, software teams would build monolithic applications and release maybe once a year. Today’s leading software development teams have embraced agile development and DevOps workflows. Slowly but surely, monthly releases are becoming the norm. Development is becoming more collaborative, too, with both colleagues and users steering the product roadmap.

ITIL 4 recognizes and supports this new way of working with new core messages:

  • Focus on value.
  • Start where you are.
  • Progress iteratively with feedback.
  • Collaborate and promote visibility.
  • Think and work holistically.
  • Keep it simple and practical.
  • Optimize and automate.

Where Does Your Organization Stand?

If your company hasn’t yet implemented ITIL, what are you waiting for?

Whether you’re a startup or your organization has been around forever, ITIL serves as a guiding framework. Follow it and it enables you to protect your networks, support your developers, and delight your customers. 

And what exactly is the alternative, anyway? Running your IT department like the Wild West?

With so much on the line, you can’t afford that risk. So become an ITIL-driven organization. That way, you’ll get the peace of mind that comes with knowing your networks and infrastructure are secure and support innovation and agility. 

What’s not to like?

Author Justin Reynolds

This post was written by Justin Reynolds. Justin is a freelance writer who enjoys telling stories about how technology, science, and creativity can help workers be more productive. In his spare time, he likes seeing or playing live music, hiking, and traveling.

Software-Testing-Anti-Patterns

Software Testing Anti Patterns

Since the dawn of computers, we’ve always had to test software. Over the course of several decades, the discipline of software testing has seen many best practices and patterns. Unfortunately, there are also several anti patterns that are present in many companies.

An anti pattern is a pattern of activities that tries to solve a certain problem but is actually counter-productive. It either doesn’t solve the problem, makes it worse, or creates new problems. In this article, I’ll sum up some common testing anti patterns.

 

Only Involving Testers Afterwards

Many companies only involve the testers when the developers decide a feature is done. The requirements go to the developers, who change the code to implement the requested feature. The updated application is then “thrown over the wall” to the testers. They will then use the requirements to construct test cases. After going through the test cases, the testers will often find all sorts of issues so that the developers need to revisit the new features. This has a detrimental effect on productivity and morale.

Such an approach to testing is used in many companies, even those that talk about modern practices like Agile and DevOps. However, “throwing things over the wall” without input from the next step goes against the spirit of Agile and DevOps. The idea is to have all disciplines work together towards a common goal.

Testing is about getting feedback, regardless of whether it is automated testing or not. So of course you have to test after the feature has been developed. But that doesn’t mean you can’t involve your QA team earlier in the process. Having testers involved in defining requirements, identifying use cases, and writing tests is a way to catch edge cases early and leads to quality tests.

Not Automating When You Can

Tests that run at the click of a button are a huge time saver, and as such they also save money. Any sufficiently large application can have hundreds or even thousands of automated tests. You can’t achieve efficient software delivery if you’re testing all this manually. It would simply take too much time.

One alternative I’ve seen is to stop testing finished features. But due to the nature of software, existing features that used to work can easily break because of a change to another feature. That’s why it pays off to keep verifying that what used to work still works now.

The better alternative to manual testing is to automate as many tests as you can. There are many tools to help you automate your tests, ranging from separate pieces of code (unit tests), through the integration of these pieces (integration tests), to full-blown end-to-end tests. As a tester, you should encourage the whole team to be involved in testing. It will encourage them to write code that is fit for automated tests. Help developers write and maintain automated tests. Help them identify test cases.
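As a minimal example of a test that runs at the click of a button, here is a unit test using Python’s built-in unittest framework. The discount function is invented for illustration.

```python
import unittest

# A made-up piece of application code to put under test.
def apply_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(80.0, 25), 60.0)

    def test_rejects_bad_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(80.0, 150)
```

Once a suite like this exists, every build can rerun it automatically, so a change to one feature that breaks another is caught the same day.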

Expecting to Automate Everything

As a counterargument to my previous point, be wary of trying to automate every aspect of testing. Manual testing can still have its place in a world where everything is increasingly automated. Some things could be too hard or too much work to automate. Other scenarios may be so rare that it isn’t worth automating, especially if the consequences of an issue are acceptable. Another thing you can’t expect to automate is exploratory testing. Exploratory testing is where testers use their experience and creativity to test the application. This allows the testers to learn about the application and generate new tests from this process. Indeed, in the words of software engineering professor Cem Kaner, the idea behind exploratory testing is that “test-related learning, test design, test execution, and test result interpretation [are] mutually supportive activities that run in parallel throughout the project.”

Lack of Test Environment Management

Test Environment Management spans a broad range of activities. The idea is to provide and maintain a stable environment that can be used for testing. Typically, we call such an environment a testing or staging environment. It's the environment where testers or product owners can test the application and any new features the developers have delivered. However, if such an environment isn't managed well, it can lead to a very inefficient software delivery process. Examples are:

●  Confusion over which features have already been deployed to the test environment.
●  The test environment is missing critical pieces or external integrations, so not everything can be tested.
●  The hardware differs significantly from the production environment.
●  The test environment isn't configured correctly.
●  There's a lack of quality data to test with.

Such factors can lead to back-and-forth discussion between testers, management, and developers. Bugs may go unnoticed, or reported bugs may not be bugs at all. Use cases may be hard to test, and bugs reported in production may be hard to reproduce. Without good test environment management, you will be wasting time and losing money.

Unsecured Test Data

Most applications need a set of data to test certain scenarios. Not all data is created equal, though. With modern privacy laws, you want to avoid using real user data. Both developers and testers often have to dig into the data of the test environment to see what is causing certain behavior. This means reading what could be personally identifiable information (PII). If this is data from real users, you might be violating certain laws. Moreover, if your software integrates with other systems, the data may flow away from your system to a point where it is out of your control, maybe even to another company. This is not something you want to do with real people's data. Security breaches can lead to severe damage to your public image and to financial losses or fines. So you want either made-up data or obfuscated, secured data. But you also want to make sure that the data is still relevant and valid in the context of your application. One possible solution is to generate the data your tests need as part of your tests.
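As a sketch of those two approaches, obfuscating real values and generating made-up data, here is what that could look like in Python using only the standard library. The field names and record layout are illustrative assumptions, not taken from any particular system.

```python
import hashlib
import random

def mask_email(email: str) -> str:
    """Obfuscate a real address: hash the local part so the result is
    stable and unique but can't be traced back to a person."""
    local, _, _domain = email.partition("@")
    digest = hashlib.sha256(local.encode()).hexdigest()[:10]
    return f"user_{digest}@example.com"  # example.com is reserved for testing

def generate_customer(rng: random.Random) -> dict:
    """Build a wholly made-up customer record for test fixtures."""
    n = rng.randint(1000, 9999)
    return {
        "name": f"Test Customer {n}",
        "email": f"customer{n}@example.com",
        "phone": f"+1-555-01{rng.randint(0, 99):02d}",  # 555-0100..0199 is reserved for fictional use
    }

rng = random.Random(42)  # a fixed seed keeps generated fixtures reproducible
print(mask_email("alice.smith@gmail.com"))
print(generate_customer(rng))
```

Because the masking is deterministic, the same real record always maps to the same obfuscated record, so relationships between tables survive the scrubbing, while the generated records contain no real PII at all.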

Not Teaching Developers

The whole team owns the quality of the software. Pair with developers and teach them the techniques so that they can test the features as they finish them. This is especially important in teams that (aspire to) have a high level of agility. If you want to continuously deploy small features, the team will have to continuously test the application. This includes developers, instead of having them wait for the testers. In such a case, the role of testers becomes more of a coaching role. If testers and developers don’t work together closely, both will have negative feelings for each other. Developers will see the testers as a factor blocking them from moving fast. Testers will have little faith in the capacity of the developers to deliver quality software. In fact, both are right. If the two groups don’t collaborate, precious time and effort will be lost in testing a feature, fixing bugs, and testing the feature again. If the developers know what will be tested, they can anticipate the different test cases and write the code accordingly. They might even automate the test cases, which is a win for testers and developers.

Streamline Your Testing!

The major theme in this article is one of collaboration. Testers and developers (and other disciplines) should work together so that the software can be tested with the least amount of effort. This leads to a more efficient testing process, fewer bugs, and a faster delivery cycle. Top that off with good test environment management (which is also a collaborative effort) and secure data, and you have a winning testing process.

Author Peter Morlion

This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.

Failed Service

5 Reasons IT Service Management is Failing

Today’s IT organizations are busier than ever. They process more data, employ more people, and empower more businesses than at any other time in history. This growth in IT power and responsibility highlights the necessity that IT organizations build upon good processes. Many organizations turn to ITIL and IT service management to provide structure to their IT organization. While ITIL is a terrific framework for managing IT organizations, it’s not a silver bullet. Simply knowing about ITIL and using it to structure your IT organization isn’t enough to ensure success.

If you’re concerned that your IT service management processes might be failing your team or your business, read on. I’ve laid out five red flags that will help you detect if IT service management is failing.

 

Failed Service

#1: You’re Not Properly Scoping Changes

A common mistake among IT organizations is failing to set realistic targets for success of new processes. This can take several different forms. All of them are quite damaging to your business.

One form is scoping that may be insufficiently measurable. For instance, leadership doesn’t provide any specific targets but merely sets a goal that things will “get better.” A goal that relies on relative measures of success like “getting better” means measurement will be subjective. Subjective measurements involve the perception of stakeholders, which can be easily swayed by variations in day-to-day service. You don’t want the business to perceive your team as failing because the CTO’s laptop just happened to have a faulty hard drive the day before an organizational review of Service Management objectives.

Another form is scoping that’s too ambitious. An example might be an IT service manager setting a service level agreement that says you’ll resolve all incidents in one hour. That’s not a realistic timeline. Setting unrealistic timelines for employees degrades morale and makes those goals seem meaningless.

The opposite problem can also be trouble for an IT organization. It’s no good to set goals that won’t accomplish anything at all. Setting a goal that’s too loose means your organization won’t need to change to improve and will fail to provide value to the business.

#2: You’re Using the Wrong Tools

While ITIL is primarily focused on creating good processes for your IT organization, tooling is still very important. Regardless of your role in your business's IT organization, you need the right information at the right time to do your job. High-quality IT service management software is regularly underrated as a part of a good IT service management implementation. It's not just about getting the right information to the right people. It's also about making sure that software is easy to use for business users. An effective IT service management implementation puts customers in a position to succeed, even when other parts of the IT organization are failing.

One way to identify failing tools is by looking for common pain points. Spend some time with key users of your IT service management software. Do they regularly have a hard time finding things? Is their time spent trying to make sure they don’t “mess up” the software? Does the software itself suffer regular outages?

If you suspect that your IT service management tools aren’t living up to their promises, you might want to check out a new platform like Enov8. You may find that you can easily cover gaps in your processes with software instead of painful changes on the process side.

#3: You’re Thinking About Incidents Wrong

One way IT service management systems regularly fail their users is by focusing too much on fixing problems. I know, that seems like an odd response. The truth is that sometimes IT organizations can focus too much on fixing their own problems over solving problems for the business.

IT organizations regularly think about incidents as engineering problems while users think about them as an inability to get work done. I really like the analogy of a broken light bulb. An IT organization sees the broken light bulb as the problem. The business doesn’t see it that way. Instead, they feel that the problem is that they’re trying to work in the dark. Engineers might spend days trying to get a new light bulb to users while a much simpler fix would simply be to open the window shades.

IT service management works best when it focuses on delivering the results the business needs. Oftentimes that requires quality engineering, but engineering should never be the primary concern. If your team looks at a new incident and immediately jumps to figuring out the technical cause, your IT service management implementation is probably failing. Focus first on fixing the problem for the business before trying to fix the root cause.

#4: Your Processes Are Too Complicated

IT service management is about putting processes into place in order to solve problems for the business. This is a worthy goal! Unfortunately, lots of times organizations lose that vision in the day-to-day running of the team. Something goes wrong as part of incident response, so they add a new step to a process. That new step for the process solves one problem but creates another problem that isn’t immediately apparent. When a problem crops up from that new change, the team adds another step.

You can see where this is going. In trying to fix lots of little problems encountered by your IT service management implementation, you’ve created one big one. Your processes have become much too complicated. The consequences of over-complicated processes are numerous. Employees don’t know what to do while dealing with problems. Management can’t easily understand the state of any given incident’s response. The business is stuck suffering from open issues. Resist the urge to add a new part of the process every time you encounter a problem. If you’re in the habit of doing this, look for what steps of the process you can remove to simplify it.

#5: You’re Not Focused on People

ITIL books and training focus a lot on processes and systems. That’s necessary because people writing books or designing training don’t know the people in your business. But the truth of the matter is that those people are the reason for IT service management. The goal is to make their lives easier. It’s not about implementing a specific process or creating the perfect architecture.

At the end of the day, the true measure of success is whether your IT organization makes working for your business better. Successful IT service management implementations spend a lot of time thinking about their users. They talk with them and listen to the problems those users are facing. Unsuccessful implementations get bogged down by worrying about metrics and tweaks to the process.

The Hardest Part is Recognizing the Problem

Most IT service management implementations don’t fail because of malice. They don’t fail because of incompetence on the part of the team. Those implementations fail because the team didn’t recognize the warning signs of failure before it became entrenched within their system. The IT organizations pursued their implementation with the best of intentions but didn’t know they were headed toward failure. If you recognize some of these issues within your organization, it’s not too late to start fixing them. It’ll require diligence and critical thinking, but you can absolutely be successful.

Author Eric Boersma

This post was written by Eric Boersma. Eric is a software developer and development manager who's done everything from IT security in pharmaceuticals to writing intelligence software for the US government to building international development teams for non-profits. He loves to talk about the things he's learned along the way, and he enjoys listening to and learning from others as well.

Test Environment Management Tools Compared

Five years ago, if you were asked to recommend a "Test Environment Management" platform, you might have struggled. In fact, you might have struggled to identify even one, particularly if you considered your own DevTest teams' behaviour: lots of disruption, delays, misconfiguration, and the inevitable use of spreadsheets for tracking project bookings, MS Visio documents for capturing system information, email for reporting, and perhaps, if you were lucky, some test automation for platform health checks. Not exactly elegant nor scalable, but undoubtedly better than complete chaos.

However, things have somewhat changed. With a raft of solutions now claiming to solve this problem, The Last Frontier of the SDLC, the question is no longer "what" but "which" platform will meet our needs and address one of the SDLC's biggest "Waste Areas".

At TEMDOT, we decided to compare six of the biggest players in this space across 10 key areas:

Key TEM Vendors

Key TEM Performance Areas

  1. Modelling
  2. Booking Management
  3. Coordination
  4. Ticketing
  5. Health Monitoring
  6. Automation & DevOps
  7. Data Management
  8. Reporting
  9. Extensibility
  10. Affordability

Test Environment Management Tool Scoring

Area-1 Environment Modelling

The ability to know what your Environments and Systems look like.

Historically think Visio or your CMDB (if you have one).

Gold Medal Position:                  

Enov8 & ServiceNow both offer powerful Visual CMDBs & Component / discovery mapping.

Silver:                

Plutora & Xebia offer modelling capability.

Bronze:              

Apwide & Omnium modelling is achieved via tabular forms.

Area-2 Booking & Contention Management

The ability to capture environment requirements & manage contention on Environments & Systems.

Historically think Email & an attached Word document.

Gold Medal Position:              

Enov8 & Plutora offer advanced booking & contention analysis methods.

Silver:                 

Apwide & ServiceNow offer booking-request capability (see Ticketing).

Bronze:              

Xebia has no obvious environment booking or contention mechanism.
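To illustrate what contention analysis boils down to, here is a minimal sketch in Python that flags overlapping bookings for the same environment. The environments, teams, and dates are made up for illustration; the platforms compared here layer approvals, calendars, and notifications on top of this core idea.

```python
from datetime import date
from itertools import combinations

# Each booking reserves an environment for a team over a date range (inclusive).
bookings = [
    {"env": "SIT-1", "team": "Payments", "start": date(2024, 3, 1), "end": date(2024, 3, 10)},
    {"env": "SIT-1", "team": "Mobile", "start": date(2024, 3, 8), "end": date(2024, 3, 15)},
    {"env": "SIT-2", "team": "Web", "start": date(2024, 3, 1), "end": date(2024, 3, 5)},
]

def find_contention(bookings):
    """Return pairs of bookings that claim the same environment on overlapping dates."""
    clashes = []
    for a, b in combinations(bookings, 2):
        same_env = a["env"] == b["env"]
        overlaps = a["start"] <= b["end"] and b["start"] <= a["end"]
        if same_env and overlaps:
            clashes.append((a["team"], b["team"], a["env"]))
    return clashes

print(find_contention(bookings))  # [('Payments', 'Mobile', 'SIT-1')]
```

Spotting the Payments/Mobile clash on SIT-1 before both teams show up on the same day is precisely the value a booking and contention management feature provides, just at far greater scale than a spreadsheet or email thread can.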

Area-3 Environment Coordination

Tracking Events & Release activity across space (Environments) & time (Month, Year etc).

Historically think MS Project plans.

Gold Medal Position:   

Apwide, Enov8, Plutora, ServiceNow offer Environment & Release based calendaring.

Note: Enov8 & Plutora offer Runsheets / Implementation Plans (respectively).

ServiceNow offers checklists.

Silver:                 

Xebia – Calendaring is release centric (as opposed to environment centric).

Bronze:              

Omnium (limited capability identified).

Area-4 Ticketing

Ticketing / IT Service Management to capture Environment Change Requests, Incidents, etc.

Historically think Remedy.

Gold Medal Position:              

ServiceNow has advanced ITSM methods.

Silver:                 

Apwide (using Jira), Enov8, Plutora have solid Ticketing / Requests functionality.

Bronze:              

Omnium & Xebia are dependent on other tools.

Area-5 Health Monitoring

The ability to check that Systems, Components, or Interfaces are up.

Historically think Test Automation scripting or your server monitoring solutions like Zabbix.

Gold Medal Position:                  

Enov8 & ServiceNow offer integration methods & native agents to monitor health.

Silver:

Apwide & Plutora have APIs that logically allow system health updates.

Bronze:               

Omnium & Xebia don't play in this space.
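A basic health check of the kind these platforms automate can be sketched in a few lines of Python with the standard library. The endpoint URLs below are hypothetical placeholders; real monitoring adds scheduling, alerting, and history on top of this.

```python
import urllib.request

# Hypothetical endpoints; substitute your environment's actual health URLs.
ENDPOINTS = {
    "auth-service": "https://test-env.example.com/auth/health",
    "orders-api": "https://test-env.example.com/orders/health",
}

def check(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with a 2xx status within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:  # covers URLError, connection refused, and timeouts
        return False

def run_checks(endpoints: dict[str, str]) -> dict[str, bool]:
    """Probe every endpoint and report which systems are up."""
    return {name: check(url) for name, url in endpoints.items()}

if __name__ == "__main__":
    for name, up in run_checks(ENDPOINTS).items():
        print(f"{name}: {'UP' if up else 'DOWN'}")
```

Even a simple script like this, run on a schedule, answers the question "can I test against this environment right now?" before a tester loses an hour finding out the hard way.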

Area-6 Automation & DevOps

The ability to automate key Environment Operations using code.

Think Jenkins or Puppet Jobs.

Gold Medal Position:                  

Xebia is a powerful release orchestrator (its primary purpose).

Silver:                 

ServiceNow Orchestration automates IT & Business Processes.

Enov8 offers “agnostic” Scripting Hub (Visual Orchestrate), Webhooks & URL Triggers.

Bronze:              

Apwide integration is very simple but can be achieved with GET/POST methods.

Plutora needs other tools (like Dell Boomi) to automate and integrate properly. The SaaS-only option can also be limiting.

Omnium integrates with other tools to automate.

Area-7 Data Management

The ability to manage one's data, e.g. extracting data, masking data, provisioning data, etc.

Think Compuware File-Aid.

Gold Medal Position:      

Enov8 seems to be the only solution for Test Data. Enov8 offers support for Data (PII/Risk) Profiling & Masking and Data Bookings through "Data Compliance Suite". Enov8’s Visual Orchestrate can also be used to schedule other Data Tools.

Silver:                 

Xebia & ServiceNow capabilities are limited but they can leverage their orchestrators and call other tools.

Bronze:              

Apwide, Omnium & Plutora don’t appear to play in this space.

Area-8 Reporting

The ability to get & share insights about your Environments.

Historically think drawing pretty pictures & graphs with PowerPoint.

Gold Medal Position:                

A lot of the tools have solid reporting; however, none has a strong enough Environment focus to earn a Gold Medal yet.

Silver:                 

Enov8 seems to have the best out-of-the-box Environment dashboards, though it needs simpler customization.

ServiceNow's Environment dashboards are limited but ultimately extensible.

Xebia has some solid reports, but they are more deployment focused.

Plutora relies on a new "Tableau" extension. It's getting there but seems disjointed.

Bronze:              

Apwide leverages Jira’s native capabilities.

The Omnium approach is somewhat "download/export" focused.

Area-9 Extensibility

The ability to have the product do whatever you want.

Think of Salesforce or SAP.

Gold Medal Position:                

ServiceNow – An Extensible Engine. You can use it to build anything.

Enov8 – An “Object Oriented” Extensible Engine. You can use it to build anything.

Silver:                 

Plutora has broad customization features so you can “partially” alter its behaviour.

Bronze:              

Xebia allows customization of your processes but not the platform itself.

With Apwide & Omnium you basically get what you get.

Area-10 Cost

The money ball question. And potentially the most important for some.

Gold Medal Position:           

Low Cost of Entry – Apwide, Enov8 (Free Team Edition) & Omnium

Silver:                 

Medium – Plutora & Xebia

Bronze:             

Expensive - ServiceNow (Just add another "0" for licensing & tailoring services)

The "Test Environment Management Tool" Score Card 

Test Environment Management Tools Comparison

Overall Test Environment Management Platform Rating

Final TEM Tool Positions

#1 Enov8 – Very much a Test Environment centric solution.

#2 ServiceNow – An extensible ITSM solution, expensive but powerful.

#3 Plutora – More focused on Release Planning.

#4 Apwide – Simple & elegant TEM/Release tool that has its place at the table.

#4 Xebia – More focused on Continuous Delivery.

#5 Omnium – Inexpensive and will be the right fit for some.

Note: Scoring was limited to the ten key areas recognised by TEMDOT as the most important for successful Test Environment Management. The scores do not reflect broader functionality, i.e. functionality that may be deemed more important for your organization. If you feel there are inaccurate statements in this comparison or that a tool is missing, please reach out using our contact form.