The future is PaaS

Published on: 24 December 2012
  • By Loïc Calvez
Posted in: Strategy

I was participating in a meeting with a customer and other vendors, and he challenged us to deliver his vision of PaaS: a standard infrastructure he could own or rent, onto which standard vendor systems could be uploaded. This was interesting for me because it was the first time I heard a client describe his future so clearly, in a way that aligned with what I was wishing for when I was in operations.

I feel we have already crossed the first steps to get there. An easy example is virtual appliances: if you are running an infrastructure compatible with the vendor's specifications (typically one or two hypervisor options at a certain version), you can just download a file from the vendor and upload it to your infrastructure. You go through a quick wizard and voilà, your appliance is running. Compared to the two standard options, racking and stacking a physical appliance or installing a Windows or Linux VM with an operating system, database…, this is much easier.

So what are we missing?

Today those virtual appliances have to be built and targeted for a specific hypervisor. To help vendors realize more benefits from those appliances, we need “standards” (I use that term loosely) that will allow a “standard” virtual system to be uploaded to a “standard” hypervisor. Think of it the same way you would a standard datacenter rack. You have a 19″ rack with 2U free, the server literature tells you the server fits in a standard 19″ rack and requires 2U, and you’re pretty much done (you still need to check power, connections and so on, but you don’t need a special rack for each type of server).
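One packaging “standard” of this kind already exists: the DMTF’s Open Virtualization Format (OVF), which several hypervisors can import. As a heavily simplified sketch (real descriptors carry many more sections and attributes), an OVF descriptor declares the disk files and hardware requirements of the system, much like the server literature declares 19″ and 2U:

```xml
<!-- Heavily simplified OVF descriptor sketch; real files are far more detailed -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="disk1" ovf:href="appliance-disk1.vmdk"/>
  </References>
  <VirtualSystem ovf:id="my-appliance">
    <VirtualHardwareSection>
      <!-- CPU, memory, disk and network requirements go here -->
    </VirtualHardwareSection>
  </VirtualSystem>
</Envelope>
```

Any compliant platform that can read the descriptor knows what the appliance needs, without the vendor building one image per hypervisor.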

The other issue today is that a virtual appliance is typically a single VM, not a full system. In the future, you should be able to upload a full system to a full virtual environment. The wizard will ask you simple questions (How many active users? Do you require high availability? Disaster recovery? What IP range should I use? …) and, based on the answers, it will spin up the required number of VMs (web servers, application servers, database servers), define load-balancing rules and spread the VMs to achieve performance and availability targets. It should also be able to adapt dynamically, for example spinning VMs up or down in response to events (Monday morning, a hardware failure).
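To make the idea concrete, here is a minimal sketch of how such a wizard could turn its answers into a provisioning plan. The sizing rules, names and thresholds are purely illustrative assumptions, not any real product’s logic:

```python
# Hypothetical sketch: turn wizard answers into a provisioning plan.
# The sizing rules and thresholds below are illustrative assumptions.

def plan_system(active_users, high_availability):
    """Return the number of VMs per tier for a simple three-tier system."""
    # At least two of each front tier when HA is requested, then scale by load.
    web = max(2 if high_availability else 1, active_users // 500 + 1)
    app = max(2 if high_availability else 1, active_users // 1000 + 1)
    db = 2 if high_availability else 1  # primary + replica when HA is on
    return {"web": web, "app": app, "db": db}

plan = plan_system(active_users=2000, high_availability=True)
print(plan)  # {'web': 5, 'app': 3, 'db': 2}
```

The same function could be re-run when conditions change (Monday-morning load, a failed host), which is exactly the dynamic adaptation described above.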

Another way to help realize the vision is to remove some components from the system and move them into an overall service layer. An easy example is databases: why should every system come with its own database? Why not have a company-wide “data service”? It could receive a “standard” SQL query and return the data in a standard format. The same could be done for web servers, load balancing…
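A toy sketch of what such a shared data service could look like, with SQLite standing in for whatever engine actually sits behind it (the class and method names are my own assumptions, not an existing API):

```python
import json
import sqlite3

# Hypothetical sketch of a company-wide "data service": systems submit a
# standard SQL query and get data back in a standard format (JSON here).
class DataService:
    def __init__(self):
        # sqlite3 stands in for whatever engine the service runs behind;
        # consuming systems never need to know which one it is.
        self._db = sqlite3.connect(":memory:")
        self._db.row_factory = sqlite3.Row

    def query(self, sql, params=()):
        """Run a standard SQL query and return the rows as JSON."""
        rows = [dict(r) for r in self._db.execute(sql, params)]
        return json.dumps(rows)

service = DataService()
service._db.execute("CREATE TABLE widgets (name TEXT, qty INTEGER)")
service._db.execute("INSERT INTO widgets VALUES ('gear', 7)")
print(service.query("SELECT name, qty FROM widgets"))  # [{"name": "gear", "qty": 7}]
```

The point is the interface, not the engine: as long as the query goes in and a standard format comes out, the backing database can be tuned, replaced or scaled without touching the systems that consume it.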

Some vendors are getting close. You can purchase their high-end special machine, upload your vendor specifications and you end up with a working system. It still has a high price point, you are locked in and the options are very limited, but it shows that it can be done (and that it is coming). Other vendors are approaching it from the other direction: if you build an application on their platform, you can consume queuing or database services without having to worry about how those services are provisioned. Once again, already working, but locked in and expensive.

So why is this taking so long?

Well, the list is long. It starts with the vendors. Making it easy for you to switch is not in their best interest; locking you in is much cheaper than constantly providing value (though in their defence, trying to make something that will work with everyone all the time is a pretty tall order). Then there are the internal IT departments, many of which fear that future. They are afraid they will become irrelevant. If you have a self-tuning data service, why would you need a Database Administrator (DBA)? What they fail to realize is that someone will always be needed, but you need to evolve. If you were in the business of selling ice cubes to keep things cold, the refrigerator was very bad news for you. It gave you two options: keep fighting the trend and risk becoming irrelevant, or find ways to leverage all your experience in cold stuff and move into the refrigerator business.

Conclusion

I am pretty sure most companies will land there simply because it makes sense. Companies want to focus on what their business is about, whether that is manufacturing or selling widgets, enhancing lives with services, or anything else. For most, IT is required to run the business, but IT is not a key differentiator (though it could be!). Simplifying IT, making it easy to move some portions to a public cloud and keep others internally in a private cloud, those are all good things.

Bottom line: The good news is that it is coming (more and more vendors are starting to think in that fashion), the bad news is that it probably won’t be in 2013 for most.

What do you think?
