Jonathan Thorpe Archive

It’s been years since agile methodologies went mainstream.  With DevOps and Continuous Delivery/Deployment, we are now able to work in an agile way from Dev right through to Ops, instead of just Dev and Test working in an agile manner.  I am constantly surprised when I hear of organizations not using their newfound agility to reduce batch sizes and deliver smaller amounts of quality functionality more frequently.  I also often hear about companies that keep their release cycles long and try to pack more into each release.  I’m sure we have all experienced big software releases with many issues, either as people involved in the project or as users.

I’m aware that many of you reading this may be thinking, “This guy doesn’t get it.  We work with legacy code; the components are tightly coupled” or “Our customers don’t want frequent changes; they like less frequent large updates.”

I’ve worked in environments where these arguments could be made, and I understand that from a customer point of view frequent change can be a bad thing.  As a customer of enterprise software, I have valued stability and avoided major changes that would mean repeatedly planning how to train users on new software versions.  After all, people are employed to do a job, not to spend hours learning how to use a tool over and over again.

A good example of this would be UX redesign.  If there are frequent changes that fundamentally change the way users interact with a system and require user training, then of course there will be complaints about frequent changes.  The easy way out would be to deliver these types of changes in one large batch instead of delivering many small changes over time.  This loses the advantages of agile development and introduces unnecessary risk.

A better way is to use feature flags, which allow code to be enabled or disabled while changes are continuously merged into the codebase.  This isn’t revolutionary; people have been doing it for years and have delivered high quality code in incremental pieces without a big bang integration at the end.  With a bit of forward planning and some refactoring, it is possible to deliver functionality incrementally, even in legacy codebases and on a customer-by-customer basis.  This will allow you to get new features out to your customers and prospects and be even more competitive.
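To make this concrete, here is a minimal sketch of a per-customer feature flag in Python.  The flag name, customer IDs and checkout functions are purely illustrative and not taken from any particular product.

```python
# Minimal sketch of a per-customer feature flag; all names here are hypothetical.

FLAGS = {
    # Merged code stays dark for everyone except the customers listed here.
    "new_checkout_flow": {"acme", "globex"},
}

def is_enabled(flag_name, customer_id):
    """A flag is on for a customer only if that customer is opted in."""
    return customer_id in FLAGS.get(flag_name, set())

def render_checkout(customer_id):
    if is_enabled("new_checkout_flow", customer_id):
        return "new checkout UI"      # incremental change, enabled per customer
    return "legacy checkout UI"       # existing behaviour remains the default

if __name__ == "__main__":
    print(render_checkout("acme"))     # -> new checkout UI
    print(render_checkout("initech"))  # -> legacy checkout UI
```

The point is that the new code path lives in the main codebase from day one; the flag decides who sees it, so rollout and integration stop being the same event.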

There are a growing number of examples of this and, while I won’t be at DevOpsDay LA on February 21st, there is an excellent session I would really like to attend.  Jody Mulkey of Ticketmaster will present a session called “Legacy is not an excuse: DevOps success in the enterprise.”  Jody will present on re-architecting Ticketmaster’s decades-old ticketing platform.  I’m hoping it will be a solid example of how changes can be made in smaller batch sizes in the enterprise.

The time of delivering changes in big batch sizes is coming to an end.  Can your organization afford to be one of the last to make the move to smaller batch sizes?  If you are evaluating changes to tools or processes, I believe it is wise to assume that delivering code in smaller batch sizes much more frequently is coming sooner rather than later.  If you aren’t delivering small batches of changes frequently now and aren’t planning to do so in the near future, at least design new systems or implement new processes and tools with these principles in mind.  You will be glad you did sooner than you think!

You can learn more by attending the latest DevOps Drive-In webcast on continuous delivery in the enterprise on February 19th.  Bola Rotibi, Research Director from Creative Intellect Consulting, will be our guest speaker and will share best practices for achieving continuous delivery in the enterprise.  Learn more and register.



So you think you have an excuse not to practice continuous delivery…

In the January 2014 DevOps Drive-In webcast, Gene Kim and I discussed DevOps frequently asked questions.  I think that we provided a compelling case for adopting a DevOps mindset in your organization.  I even wrote a short blog post on three ways to get started.

The question I have for you is: why stop at DevOps?  Call me crazy but I think this whole “Continuous Delivery” thing sounds like an exciting adventure that could very well bring you fame and fortune in your organization.  Ok, so the fortune part might be an exaggeration but hopefully you get my point.

On February 19th at 9am PST we’ll be discussing Continuous Delivery with Bola Rotibi of Creative Intellect Consulting.  Bola is the author of a report on Continuous Delivery and why it is applicable and important to people like you!

I’m a big believer in having a small set of key takeaways from a webinar.  So, in a nutshell here are three key things we will distill about Continuous Delivery:

  1. Key challenges for adopting Continuous Delivery
  2. Attributes and inhibitors to Continuous Delivery in the enterprise
  3. Top guidance points for enterprises

We hope you will join us for the webcast and that by the end of it you’ll be thinking of lots of fun ways to bring Continuous Delivery into your organization.

Register now!



There are times when I hear things about DevOps which just don’t seem to make much sense.  Sometimes the statements are quite destructive and, in my opinion, make Dev and Ops collaboration so much harder.

Recently, I heard yet again that DevOps will result in Dev taking over Ops responsibilities, essentially reducing the need for people in Ops.  I don’t agree.  The role of staff in both development and operations is changing.  Operations staff are more likely to have development skills and spend more time automating tasks.  If you think about it, this makes sense.  Giving developers self-service access isn’t about giving up control.  The controls are still in place; but instead of being applied each time by a human gatekeeper in operations, they are implemented in processes and automation managed by the people who were once those gatekeepers.
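As a rough illustration of what “controls implemented in automation” can look like, here is a minimal Python sketch of a self-service deployment request being checked against codified policies.  The environments, change-ID format and deployment window below are hypothetical placeholders, not anyone’s real policy.

```python
# Minimal sketch of codified gatekeeping; the policies below are hypothetical.
from datetime import datetime
from typing import Optional

ALLOWED_SELF_SERVICE_ENVS = {"dev", "qa", "staging"}  # prod keeps the standard release process

def change_is_approved(change_id):
    # Placeholder: in practice this would query your change-management system.
    return change_id.startswith("CHG-")

def in_deployment_window(now):
    # Placeholder policy: no self-service deployments during business-critical hours.
    return now.hour < 8 or now.hour >= 18

def request_deployment(env, change_id, now: Optional[datetime] = None):
    """Grant or refuse a developer's self-service deployment request."""
    now = now or datetime.now()
    if env not in ALLOWED_SELF_SERVICE_ENVS:
        return "refused: environment requires the standard release process"
    if not change_is_approved(change_id):
        return "refused: change record not approved"
    if not in_deployment_window(now):
        return "refused: outside the deployment window"
    # The same checks ran; no human gatekeeper had to be interrupted to run them.
    return f"deployment of {change_id} to {env} started"

if __name__ == "__main__":
    print(request_deployment("qa", "CHG-1234"))
    print(request_deployment("prod", "CHG-1234"))
```

The checks a gatekeeper used to perform by hand haven’t gone away; they have been encoded by the people who understand them best.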

Operations is a very specialized role.  The tasks Operations staff work on are not so trivial that they can be simply handled by any developer.  If anything, I see the role of operations becoming even more important in the future.

For more, take a look at this fun 2-minute video of what DevOps is all about.  You’ll see that DevOps is about Dev and Ops collaborating and working together.

Tags: DevOps


The DevOps Drive-In FAQ webcast with Gene Kim was a great success.  As usual, Gene provided a huge amount of valuable information.  I came out of the webcast feeling even more positive about the future of DevOps, which will surprise some people as I’m sure they didn’t think I could become more enthusiastic.

There were a lot of excellent questions which may well inspire future blog posts, but for now I’ll reiterate my thoughts on three tips to get started with DevOps from the ground level.

  1. Determine what outcomes you want to achieve and how to measure them.
  2. Run a pilot project.  You don’t need to adopt new processes all the way to production; even in pre-production environments the value can be huge.  Over time, take your new processes all the way to production, but don’t get hung up if you can’t get there right away.
  3. Accept that there will be failures.  Learning how to recover from failure quickly, rather than focusing on adding more and more layers of process and approvals, will help reduce risk and let you deliver value to your customers faster.

For those of you who missed the webcast, you can view the recording here.

Finally, a big thank you to Gene. It’s always a pleasure chatting with you.

I look forward to seeing you all again at the next DevOps Drive-In!

Tags: DevOps


Recently, I was talking to a financial services company during one of Serena’s DevOps Drive-In webcasts.  It was a wonderful story of the evolution of a release process over time and the benefits that were realized.

The company went from multiple teams each doing release management in a slightly different way and communicating via email, to communicating changes via SharePoint, and finally to capturing both the process layer and the automation in a single solution based on Serena technology.

I’ve worked in a release team myself and I can totally relate to what was said in the webcast.  Having multiple teams all doing things in a slightly different way is extremely inefficient and also encourages mistakes to be made.  In my personal experience, when there are multiple releases coming out and I’m busy, tired and under pressure, it’s difficult to remember the slight variations from one product to the next.

I also understand the company’s idea of moving to SharePoint to help solve part of the problem.  I seriously considered it myself, but it is, at best, a loosely fitting Band-Aid, not a long-term fix.

You may have heard me say or write that automation alone isn’t enough.  This webcast was a wonderful example of that.  I could go on, but it’s much more powerful for you to hear this from the customer.  There are two versions of the recording, a short 12-minute version and the full recording.  Both are well worth your time.



DevOps author and researcher Gene Kim will be my guest speaker for the next DevOps Drive-In webcast on January 22.  He will share his most frequently asked questions and likely tell us some great stories as he answers them.

Gene is the author of The Phoenix Project and has been studying high performing organizations for many years.  For those of you who haven’t read the book: if you have worked anywhere remotely close to IT Operations, be prepared for a great read and maybe a little bit of PTSD as the memories come flooding back.

With so many people eager to share their DevOps stories with Gene, he has great insight into the DevOps movement and what it means for enterprises.  I’m really looking forward to this event and hope you can join us.  Register for the webcast!

Tags: DevOps


DevOps is something that is talked about frequently, but what does it really mean? How would you react to the following statements and questions?

  • DevOps is new and revolutionary!
  • People have been doing it for years, right?
  • DevOps practices work best in organizations that provide SaaS
  • DevOps is also for organizations that don’t do WebApps
  • Is it some kind of weird thing from Europe?

What if I said all of the above were at least partially true?

Now that you are suitably confused, you are in the right mental state for DevOps to be explained to you in Serena’s new two-minute DevOps video.  Click on the image above and find out what it means to accelerate the application release process by bringing Development and IT Operations into wonderful harmony.



2013 has been an exciting year in the evolution of the DevOps movement, and at Serena we predict even more exciting developments in 2014.  Based on information collected from conferences across the globe and from our customers, we put forward three DevOps predictions for 2014:

Prediction 1: IT organizations realize that DevOps is more than just automating deployments.

At DevOps conferences worldwide there has been a strong emphasis on addressing culture, as well as automation, in order to be successful.  Conversation is usually around CAMS, not AMS.

  • Culture
  • Automation
  • Measurement
  • Sharing

Outside of what I refer to as the “DevOps bubble,” as DevOps hits the mainstream the Culture part of CAMS seems to get lost in translation and Automation is treated as the cure.  I’ve seen a steady increase in enterprises participating in DevOps events, and people are realizing that a successful DevOps initiative takes more than just automation…it also requires addressing the coordination, collaboration and trust amongst the teams that participate in the application lifecycle.

In 2014, enterprise IT organizations embarking on DevOps improvement initiatives will look for ways to address both the process and the people part of the application lifecycle.

Prediction 2: Industries that are traditionally slower to change will now lead in DevOps adoption.

We are noticing a lot of interest in DevOps from the financial services and retail industries, where enabling consumers with technology and evolving its capabilities quickly can be a significant competitive advantage.  Competition is intense due to customer expectations.  In order to be flexible enough to meet those needs and transform business, DevOps is key.  Traditionally, these industries are seen as conservative and risk-averse. The risk they now face is not transforming their technology offerings quickly enough.

In 2014, look for exciting technology innovation from the financial services and retail industries as they increase their ability to deliver innovative services quickly and with less risk.

Prediction 3: Even more spectacular software release failures.

Even though they are necessary, fundamental changes in the way large IT organizations release software are bound to result in some high profile failures before the process gets fully under control.  We’ve seen the BART system grind to a halt after a failed update, Knight Capital nearly go bankrupt after a bad release, and countless other notable failures.  While we never want to see a failure that reflects badly on the technology industry, we expect some high profile glitches along the way to DevOps nirvana.

In 2014, keep an eye out for software release failures…these are likely from the enterprises that are pushing DevOps improvement initiatives the hardest!

What are your thoughts? Do you have any DevOps predictions of your own?  I’ll circle back mid-year or so and see if my projections are on track to becoming reality in 2014.  Happy New Year!



Earlier in December, Kurt Bittner, Principal Analyst at Forrester Research, Inc., participated in the December DevOps Drive-In webcast, “12 Ways of DevOps.”  A couple of the “12 ways” were thought-provoking for me.  When people talk about DevOps and Continuous Delivery, it is frequently in the context of applications that are hosted in the cloud and are relatively easy to iterate quickly.

Hypothesis-Driven Development

I find the idea of hypothesis-driven development extremely appealing, but achieving it outside of cloud-based web apps would appear to be challenging.  If you are fortunate enough to have a well-functioning, responsive customer advisory board, then you are in relatively good shape to do hypothesis-driven development.  Sure, it takes a bit more coordination and scheduling, but it is certainly possible, albeit probably with a small sample size.

I’m hoping that as DevOps and related methodologies are practiced in more traditional environments, there will be good examples of how people have successfully managed to do hypothesis-driven development for legacy packaged or on-premises applications.
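For readers new to the term, here is a minimal Python sketch of the core loop, assuming a hypothetical metric (checkout completion rate) and made-up cohort numbers.  With an on-premise product and a customer advisory board, the “cohorts” might be a handful of customers rather than thousands of web sessions.

```python
# Minimal sketch of hypothesis-driven development; the metric and numbers are made up.

def completion_rate(completed, started):
    return completed / started if started else 0.0

# Hypothesis: the redesigned flow raises checkout completion by at least 5 points.
control = {"started": 400, "completed": 260}    # customers on the existing flow
treatment = {"started": 380, "completed": 285}  # customers with the change enabled

control_rate = completion_rate(control["completed"], control["started"])
treatment_rate = completion_rate(treatment["completed"], treatment["started"])
lift = treatment_rate - control_rate

print(f"control: {control_rate:.1%}, treatment: {treatment_rate:.1%}, lift: {lift:.1%}")
if lift >= 0.05:
    print("Hypothesis supported: keep the change and widen the rollout.")
else:
    print("Hypothesis not supported: rethink the change before rolling it out further.")
```

The hard part in traditional environments isn’t the arithmetic; it’s getting the two cohorts and the measurements in the first place.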

Working in Small Batches

I have seen this become a problem even when a product is hosted in the cloud with one codebase for all customers.  Just because code is in the cloud doesn’t mean the codebase is structured in a way that can be worked on in small batches.

Even if you are able to work in small batches, unless those smaller batches are released frequently you really aren’t getting the benefit of fast user feedback on your product changes, and opportunities to mitigate risk are being missed.  I’m hoping that during 2014 there will be more success stories of teams working with legacy apps and getting fast user feedback on product changes successfully at large scale.

Serena’s next DevOps Drive-In webcast will be on Enterprise DevOps: Implementing Self-Service Application Deployment with Serena Release Manager on January 16.  See more details and register.



In February, Infrastructure as Code takes center stage in Europe. On February 3rd and 4th there is a Configuration Management Camp in Gent, Belgium. As expected, the leading providers of infrastructure as code solutions will be there. There is a lot of innovation in this space and these solutions fit in nicely with Serena Release Manager. Those of you who follow the DevOps tools space might be wondering where Serena’s tools fit in with these solutions.

I recently watched a webcast by the Serena team in the UK. In the webcast Kevin shows how to use Chef Solo with Serena Release Manager v5 along with Serena’s automation module to configure a server and deploy an application to the server.

I have written previously about how I see these tools fitting into the Serena stack. In the run-up to Configuration Management Camp EU, I’d like to spend some time putting together a simple example of how to use one of the other configuration management solutions with Serena Release Manager v5 and blog about the experience. I’m considering:

  • Ansible
  • Chef
  • CFEngine
  • Puppet
  • SaltStack

If you have a preference for which of these you would like to see me use with Serena Release Manager, email me at jthorpe (at) serena (dot) com and I’ll choose whichever has the most votes. By the end of the example you will have a clear understanding of why these tools complement Serena Release Manager and are not a replacement for Application Release Automation.
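As a rough preview of that division of labour, here is a minimal Python sketch of a release step that hands machine configuration to a configuration management tool (Ansible in this example) and keeps the application deployment in the release automation layer. The playbook, inventory and artifact names are hypothetical, and this is purely illustrative rather than Serena Release Manager’s actual API.

```python
# Minimal sketch: configuration management prepares the server, release
# automation owns the overall process and the application deployment itself.
# All file, host and artifact names below are hypothetical.
import subprocess

def configure_server(inventory, playbook="webserver.yml"):
    """Delegate machine configuration to the CM tool (Ansible here)."""
    subprocess.run(["ansible-playbook", "-i", inventory, playbook], check=True)

def deploy_application(artifact, target):
    """Placeholder for the application deployment step driven by the release tool."""
    print(f"deploying {artifact} to {target}")

if __name__ == "__main__":
    configure_server(inventory="hosts.ini")
    deploy_application(artifact="myapp-1.2.3.war", target="qa-web-01")
```

The configuration management tool answers “what should this server look like?”, while the release automation layer answers “which tested version of the application goes where, when, and with whose approval?”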