Mon, 19 June 2017
We discuss how Microsoft Azure is catching up with AWS. Azure seems to become more compelling each year with new services like Azure Stack and Azure Container Service. In 2016, AWS grew at the same pace as the market, but Microsoft Azure grew much faster. Microsoft launched Azure 5-6 years after AWS, so AWS has an enormous lead, but Azure is now at least competitive with AWS in every area. This allows the two to compete on pricing. Enterprises no longer automatically choose AWS; they now research which cloud is right for their organization. Many organizations are also embracing multi-cloud strategies, using the capabilities of multiple public clouds for different pieces of the enterprise. For instance, it often saves money to run Microsoft services on Azure, so those workloads are often carved out separately in a cloud strategy.
Mon, 22 May 2017
We discuss Austen’s early bet on serverless computing from the first time he saw AWS Lambda. Serverless, even in the early days, has many benefits. It is microservice-based, event-driven, requires no administration, and has a compelling “pay-per-execution” pricing model.
The Serverless Framework was launched as an application framework to address the main problem with serverless computing today: if you want to build a sophisticated system on this type of service, you are dealing with lots of independent units of deployment. One application is a combination of many Lambda functions, and managing all of them together, not to mention the event-driven computing, can be chaotic. The framework provides a single file that defines a serverless application, provisions all the infrastructure for you, and gets the app up in seconds.
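A Lambda function itself is just a handler invoked per event. A minimal Python sketch follows; the event shape and the response format mimic the common API Gateway convention and are used here purely for illustration:

```python
import json

def handler(event, context):
    """A minimal AWS Lambda-style handler: invoked once per event,
    with no servers to administer and pay-per-execution billing."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally, the handler can be exercised directly with a sample event.
result = handler({"name": "serverless"}, None)
```

Each function like this becomes one of the many independent units of deployment the framework's single configuration file ties together.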
We are still in the early phases of serverless computing and the trend is yet to be defined. It’s impressive how fast the cloud providers are moving with serverless computing and building new features around it. Adoption from enterprises has also been fast. The challenges of serverless computing are that there are a lot of changes at once for an organization to adopt it, and this often requires cultural shifts as well. Serverless computing requires a new way of thinking for enterprises, which is a challenge. But for the enterprises that embrace it, the gains are worth it.
Tue, 16 May 2017
We discuss the changing face of large enterprises when innovating with technology. The technology we see in big web companies from Facebook, Google, and Amazon is absolutely going to be used to reinvent how large enterprises function. But large enterprises do not need to transform into tech companies like Google to be successful. More likely the opposite is the case. Enterprises need to realize that they already are a great source of innovation and that with a focus on customers and on technology they can lead the way to success. It does not have to look exactly like Google for large enterprises to be innovative.
Figuring out what you want it to feel like is the hardest part for large enterprises. If you’re a traditional tire company, for instance, you know the tire industry but you don’t know what it feels like to be a technology company that moves quickly and safely. So how do you get the people inside the tire company to know what it feels like to move fast, and how can they apply that to tires? Knowing how the business works is incredibly important, and these enterprises know their markets better than anyone. The trick is to teach them how to use technology to enhance the business they already know.
Chef is a company built around automation. It began with infrastructure automation and has now added other products. Chef found bottlenecks at security and compliance, which led to InSpec. InSpec allows you to include compliance within code so you can continuously test and ensure you are compliant with standards. Another new Chef product is Habitat for application automation. Habitat acts as a smart supervisor who can build and release the application and manage it as well.
Wed, 10 May 2017
We discuss the founding story of CloudHealth, testing ideas to find the right problem to solve. We look at how Joe took the company from idea through finding early customers and fundraising. Joe made sure early on not to get attached to ideas, but to define key hypotheses and converge to the real opportunities through testing. As he became more confident in what he was building, he began to write more of the code for it. We look at why successful entrepreneurs need to be willing to embrace contrary opinions.
CloudHealth does cloud service management. They deliver a SaaS-based single pane of glass, single pane of governance for managing the full life-cycle of applications and infrastructure across public and private clouds. They currently have four products: one each for Amazon, Azure, and Google, plus a Data Center product. Each provides integrated reporting, recommendations, and active policy management. The policy management does not just monitor changes that deviate from your internal policies, but drives active changes to your environments to keep them in compliance. It works like a control plane that sits on top of everything you use to manage different environments.
A typical cloud management suite consists of 10-12 different tools spread across multiple cloud environments. CloudHealth lets you configure them all in its platform; it collects the information that resides in those different integrations and cloud environments and brings it back into one console that shows what the data means and how it interacts. CloudHealth then provides integrated reporting, integrated recommendations, and active policy management. With a click of a button, you can determine what it would take to integrate different tools and what provisioning the integration requires. This makes managing the cloud much more streamlined and cost-efficient.
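The active-policy idea can be sketched as a simple evaluate-and-remediate loop. This is a hypothetical illustration, not CloudHealth's actual API; all function names, resource fields, and policy rules below are invented:

```python
# Hypothetical sketch of active policy management: evaluate collected
# resources against policies and emit remediation actions.

def evaluate(resources, policies):
    """Return a list of remediation actions for policy violations."""
    actions = []
    for res in resources:
        for policy in policies:
            if not policy["check"](res):
                actions.append({"resource": res["id"], "action": policy["remedy"]})
    return actions

# Example policies: every resource needs an owner tag, and volumes
# must not exceed 500 GB (both rules are invented for illustration).
policies = [
    {"check": lambda r: "owner" in r.get("tags", {}), "remedy": "notify-owner-missing"},
    {"check": lambda r: r.get("size_gb", 0) <= 500, "remedy": "resize-volume"},
]
resources = [
    {"id": "i-1", "tags": {"owner": "ops"}, "size_gb": 100},
    {"id": "i-2", "tags": {}, "size_gb": 800},
]
actions = evaluate(resources, policies)
```

Run against the sample data, only `i-2` is flagged, once for the missing owner tag and once for exceeding the size policy; a control plane would then push those actions back into the environment.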
Thu, 4 May 2017
We discuss what serverless computing means for OpenStack private clouds. It is time to recognize that hybrid is here for a long time and we will be mixing public clouds with private clouds in the long run. We also look at Red Hat’s recent deal with AWS for OpenShift. This is another example of coopetition with AWS, which has sought out many more partnerships lately. Vendors are finding more opportunities to partner with AWS to prevent themselves from losing customers.
Thu, 27 April 2017
We discuss how sometimes you can find more impactful insights from smaller boutique research firms than the larger giants. Aragon Research is a full spectrum industry analyst research firm who provides advisory services to those who are building, buying, or investing in emerging technologies.
We take a look at the latest announcements Amazon CTO Werner Vogels made at the latest AWS Summit. We look at new SaaS contracts in the AWS Marketplace allowing smaller SaaS companies to outsource their billing to AWS.
Amazon is a company that seems to keep doing things right. They are hard to avoid as leaders in cloud computing right now. Even when they make mistakes they seem to be able to pivot them quickly into useful tools. They own somewhere around 80% of the public cloud market at the moment, and it is no surprise why: they have the best technology.
We also look at Amazon CodeStar, their improved database services, and upgraded machine learning tools such as Amazon Rekognition.
Amazon Rekognition uses machine learning for image detection to automatically monitor content. This allows us to identify objectionable images automatically. This has use-cases anywhere from identifying fake news to preventing issues with advertisements on objectionable content. Amazon is using machine learning to look at images and rank them on a 9-point scale of how objectionable the image is.
Rekognition is a deep learning service, meaning it is built on several layers of neural networks, the first layer being feature recognition, the next a classification of objectionable content, and so on. As of now, Amazon has built in a standard scale, but it would be interesting if they let users choose their own parameters for what is objectionable. Giving context for content is a difficult step in the process, which would be an interesting next move for Amazon also. There needs to be an objectionable rating for users so that the system can learn individual preferences as well.
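Consuming such a moderation result with a caller-chosen threshold might look like the sketch below. The response dict mimics the shape of the output from Rekognition's `detect_moderation_labels` API but is mocked here rather than fetched from the service, and the threshold logic is our own illustration of the user-defined parameters discussed above:

```python
# Sketch: filter a Rekognition-style moderation response against a
# per-user confidence threshold (response data is mocked, not a live call).

def objectionable_labels(response, threshold=80.0):
    """Return label names whose confidence meets the caller's threshold."""
    return [
        label["Name"]
        for label in response.get("ModerationLabels", [])
        if label["Confidence"] >= threshold
    ]

mock_response = {
    "ModerationLabels": [
        {"Name": "Suggestive", "Confidence": 92.5},
        {"Name": "Violence", "Confidence": 41.0},
    ]
}
flagged = objectionable_labels(mock_response, threshold=80.0)
```

Letting each user set their own `threshold`, or weight labels differently, is one simple way the system could learn individual preferences.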
We also discuss data privacy, and how zombie cloud data can haunt you when you think it has been deleted but it still exists somewhere else. There are legal issues around this zombie data that are being exposed now even with subjects like student standardized testing. Of course, sometimes this can be a great feature when you accidentally delete something and there is a way to find it again.
Thu, 20 April 2017
We discuss how a lot of big players in tech from the past are now gone, and this trend makes us look closely at the big tech companies today whose growth is slowing, such as Oracle and IBM. They seem to have lost their disruptive edge and struggle with the new business models they compete against, but experience tells us they are definitely survivors. They’re making money around a legacy business that we still live with today, but that is diminishing with the widespread adoption of the cloud. If they do move into the cloud, which they have taken steps towards, it doesn’t necessarily make sense because they would be taking sales from their own legacy technology.
IBM is now reinventing how to work within this new type of model that Microsoft and Amazon have built. IBM is trying to play catch up on the growth of the cloud and figuring out how to make money on it. With Watson and IoT, they are doing fascinating stuff in this space which could launch them into a hybrid cloud model with various customers, but there is no question they are struggling.
We look at how AWS is looking to simplify building and deploying apps on the cloud platform with a new service, AWS CodeStar. This service makes it simpler to set up projects by using templates for web applications, web services, and others. Developers can provision projects and resources from coding to testing to deployment. It seems to be yet another service AWS is providing that Microsoft and Google don’t have, which further solidifies its leadership in cloud computing. Amazon is great at targeting clients at all levels, from large enterprises to capturing the minds and hearts of the tech geeks. With AWS CodeStar, they aim to make it easier for developers to build applications on the cloud. 2017 is the year of the cloud land grab. Each vendor is trying to get as many people onto their platforms as possible, and AWS is trying to convert anyone and everyone.
AWS CodeStar was launched in response to enterprises facing the challenges of agile software development processes. The first challenge for a new software project is often the lengthy setup process before coding can start. The ability to pull this stuff out to the cloud and get up and running quickly is a huge strategic advantage, especially considering enterprises could take years to set up processes like these.
When an enterprise decides to switch to agile development or DevOps, there is a huge initial infrastructure setup involved: getting the IDE set up, the code repository set up with the right security, and the build and deploy systems created. By offering this in a box, an end-to-end solution including integration with JIRA for bug tracking, Amazon will save clients significant time and headaches. It would not be a surprise to see the other cloud vendors start to imitate CodeStar now that AWS has raised the bar.
Amazon has in a way become the new IBM. They are now the 800-pound gorilla that others strive to catch up to. When you look at total capacity, AWS has close to fourteen times the data capacity of the next five vendors beneath them. It will be hard to catch up with them. Other vendors often claim faster growth than AWS, but that is only because they are playing on smaller fields; AWS is so big there is less room left to grow.
Microsoft is now offering Cloud Migration Assessment services, which walk clients through an evaluation of resources they currently use to determine what to move to the cloud and what it will cost to do that. AWS offers a similar tool, and it’s clear that both tools will be used to promote their internal products. It may be a useful tool to determine the cost of migrating to the cloud, but it’s important to remember that everybody’s needs for the cloud are different, so enterprises need to focus on what their particular needs and requirements are.
No technology negates the need for good design and planning, and cloud is no exception. The pricing of moving to the cloud depends on how your particular IT is structured. Cross-vendor pricing must be considered as well, since your configuration will depend on your requirements and which vendors you are piecing together. These tools often underestimate what the migration will cost; they rarely overestimate it.
Thu, 13 April 2017
We discuss the transition from the popularity of Service-Oriented Architecture (SOA), starting in the early 2000s, to today’s microservices. SOAs were API-based, which made it easy to pivot into cloud computing. With SOAs we used to think reuse was a goal, but with microservices it has become much easier to replace everything. With microservices you are often trying to decompose as much as possible to increase speed.
We look at Pivotal and how they help large companies get good at software. They do that with agile methods. Pivotal Labs used to help large companies like eBay, Twitter and Google get ramped up on agile development. There is a lot of money going into disruptive technology today and everybody is at risk of losing their place to someone who does software better. It changes the game for these large enterprises who realize that if they can get their software skills competitive, their advantage of history and supply chain comes back to the forefront. The service experience now starts with software. Pivotal is a trusted advisor to come in and help big companies make these changes.
We also explore updates in Cloud Foundry and Pivotal Cloud Foundry. Last week an update added full Windows support under the covers of Cloud Foundry. Pivotal has been working on not only bringing in the Windows ecosystem, but also supporting more types of workloads, container networking for container communication, and persistent storage. Pivotal is checking the boxes from traditional PaaS to a cloud-native platform which runs anywhere and brings more types of workloads in. Supporting more types of applications matters because most of these companies need to improve their software capabilities but do not necessarily need to get good at infrastructure internally. Offering more types of apps and services in more places to help these enterprises improve their software practice is important. These are all ways to make developers more productive while also simplifying the ops burden, which is not easy. Learn more about Cloud Foundry at the Cloud Foundry Summit in June.
Microsoft recently acquired a small startup, Deis. Sometimes acquisitions of smaller companies can be more of a mystery than large ones. Deis powers Kubernetes on Azure, so it looks like this acquisition is most likely a mix of an acqui-hire and a purchase of technology relevant to Microsoft as they improve their Kubernetes capabilities. It seems they are doing everything it takes to get as many interesting workloads on Azure as possible. Large companies like Microsoft, AWS, Google, and IBM can’t move fast enough, so getting the technology they need to differentiate themselves will start to happen more and more. It looks like we should expect to see many more acquisitions to come and further consolidation in the industry.
Last, we look at cloud security and whether we need to encrypt everything going forward. Many major websites already encrypt by default, and encryption and multi-factor authentication need to be used everywhere. Encrypting everything adds latency to how information moves from place to place, and the extra steps annoy people. But we seem to be moving in this direction quickly, and in the cloud the performance penalty of encryption is no longer significant. By default, we should encrypt everything from now on, and assume every piece of information is going to be hackable.
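For outbound connections, at least, the "encrypt by default" posture is already the default in modern standard libraries. A small Python sketch using the stdlib `ssl` module shows that a default client TLS context refuses unverified connections out of the box:

```python
import ssl

# "Encrypt everything" in practice starts with refusing plaintext and
# unverified peers: the stdlib default client context already requires
# certificate validation and hostname checking.
context = ssl.create_default_context()

requires_cert = context.verify_mode == ssl.CERT_REQUIRED
checks_hostname = context.check_hostname
```

Both flags are on by default, so the work left to applications is mostly not turning them off.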
Thu, 30 March 2017
Data integration is going through a Renaissance at the moment, just like application integration did a few years ago. The primary driver for these changes is the movement to the cloud. When you turn back the clock, application integrations used to be simple. There were a few ERP systems, and as things started moving to the cloud via SaaS, application integration changed quite a bit. Now we’re seeing analytic applications move to the cloud as well. The biggest force of gravity is the mass movement to the cloud, and that is also driving data integration. By definition, we’re now replicating platforms as much as we are moving into them. Enterprises that are moving into the cloud need a parallel data and application integration strategy to make use of their data as an asset and to make sure the overall plan will work.
Data is frustrating to deal with. The problems seem easy and the reality is hard. Normalizing data, merging it together so you can get meaningful results, and making sure your SLAs are met in terms of freshness and quality are all easy to describe but frustratingly hard to do. With on-premises systems, there has always been a goal of democratizing data, but the challenge is access. The cloud inherently breaks this democratization barrier: there is now a public place where people can go to get access to data. That centrality is driving a lot of the innovative capabilities we are seeing emerge.
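A toy Python sketch shows why "normalize and merge" is harder than it sounds: the same customer arrives from two systems with different field names and formats and has to be reconciled into one record. The field names and mapping scheme here are invented for illustration:

```python
def normalize(record, mapping):
    """Rename source fields to canonical names and clean up values."""
    out = {canonical: record[src] for canonical, src in mapping.items() if src in record}
    if "email" in out:
        out["email"] = out["email"].strip().lower()
    return out

# Two sources describing the same customer, with mismatched schemas.
crm_row = {"Email": " Pat@Example.COM ", "FullName": "Pat Lee"}
billing_row = {"email_addr": "pat@example.com", "plan": "pro"}

a = normalize(crm_row, {"email": "Email", "name": "FullName"})
b = normalize(billing_row, {"email": "email_addr", "plan": "plan"})

# Merge on the normalized key; the CRM record wins on conflicts.
merged = {**b, **a} if a["email"] == b["email"] else None
```

Even this trivial case needs a field mapping, value cleanup, a join key, and a conflict rule; real pipelines multiply each decision across hundreds of fields and sources.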
Going from legacy applications to cloud requires a different approach to application and data integration. One starting point is choosing a cloud platform and trying to pick the right one. This can be the toughest decision a customer faces. Within each cloud platform there are also myriad choices, from which query technology to use, to whether to use Hadoop implementations on Azure, and more. There are so many choices and it’s easy to feel like you’ve made the wrong one.
Working in enterprise IT, there is a shift in the view of how data should be managed, accessed, and manipulated. In the early days, it was a developer’s job; IT was the team who knew how to access data and application systems. Self-service pushes some of that responsibility out to the edges. It’s not about wanting to move away from IT, but about empowering those who have the domain knowledge and will eventually be using the data. How we access these systems is different now, and the structure of the data is different too. It used to be row- and column-oriented. Nowadays, with REST and JSON, tools need to handle these formats natively rather than manhandling the data into unnatural shapes. That will make developers’ lives easier and make it easier for vendors to add connections to the data.
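One generic way tools bridge nested JSON and row-oriented thinking is a small flattener that turns nesting into dotted column names. This is a sketch of the general technique, not any particular vendor's approach:

```python
import json

def flatten(obj, prefix=""):
    """Flatten nested dicts into a single row keyed by dotted paths."""
    row = {}
    for key, value in obj.items():
        path = key if not prefix else f"{prefix}.{key}"
        if isinstance(value, dict):
            row.update(flatten(value, path))
        else:
            row[path] = value
    return row

# A REST-style nested payload becomes one flat, column-like record.
doc = json.loads('{"id": 1, "user": {"name": "Ana", "geo": {"city": "Boston"}}}')
row = flatten(doc)
```

The nested payload comes out as `id`, `user.name`, and `user.geo.city`, which downstream row-oriented tools can consume without forcing producers to change their format.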
Databases used to be relatively simple. These days it’s often special-purpose databases such as in-memory, object-based, hierarchical, and relational databases. In some ways this makes things more complex, but with layers of abstraction it can also make things easier. When we only had relational databases, you had to make data look relational whether it was meant to or not. Now you do not need to force different data sets into a single model.
SnapLogic is beginning to leverage AI and Machine Learning. Their focus is on making computers work for you instead of making you work for computers.
Wed, 22 March 2017
We discuss the birth of Kubernetes at Google and how it started as part of Google’s own best practices and infrastructure, then was released as an open-source project and shared externally. It was a brand-new code base, but built on the lessons of Borg, which was an internal system at Google for over 10 years. That is why Kubernetes has the design maturity of a much older project. It is a combination of Google’s past work on containers and the new container ecosystem related to Docker, so that Kubernetes can work outside of Google in any environment. Anyone can get involved and start contributing to the system. There is no enterprise version of Kubernetes; there is just Kubernetes, and the different vendors and providers add value to it.
We look at the announcements from Google NEXT. There were no Kubernetes surprises because it is developed in the open, so everything you want to know about it is actively tracked on GitHub. Instead, at Google NEXT, they tried to add clarity about why Google is working on specific features like dynamic storage provisioning and custom schedulers. It’s clear Google is evolving into the enterprise cloud space and they are committed to staying ahead of the curve. For Google Cloud, that means opening up some of its core technologies such as TensorFlow and Google Cloud Spanner so that enterprises can begin to consume Google technology.
We look at Amazon’s recent decision to allow customers with Reserved Instance contracts to subdivide some of their Linux and UNIX virtual machine instances and still keep their capacity discounts. This decision was made so that Amazon could keep up with Google on flexible pricing. These types of changes are evidence of a healthy marketplace. A lot of times people try to mimic the spending habits of traditional enterprise IT which require a three-year roadmap, which is the opposite of what cloud should be. Cloud should be a dynamic world where you can scale-up and scale-down, which is why discounts should kick in based on behavior and actual usage. It’s also why Google has per-minute billing. In this way the pricing structure matches the dynamic needs of cloud computing.
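The impact of billing granularity is easy to see with a toy calculation. The hourly rate below is hypothetical, not a real AWS or Google price:

```python
import math

# Back-of-the-envelope: a 10-minute job run 100 times a day,
# at an illustrative $0.60/hour rate (not a real SKU).
rate_per_hour = 0.60
runs, minutes_per_run = 100, 10

# Per-minute billing charges only the minutes actually used.
per_minute_cost = runs * minutes_per_run * (rate_per_hour / 60)

# Per-hour billing rounds each run up to a full hour.
per_hour_cost = runs * math.ceil(minutes_per_run / 60) * rate_per_hour
```

The same workload costs $10 a day under per-minute billing versus $60 under hourly billing, which is why granular pricing matches the scale-up, scale-down behavior cloud is supposed to reward.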
Last, we discuss how small to medium-sized businesses in India can get subsidies for cloud adoption, while the US wants to tax its usage. This may be an opportunity for the US government to offer tax breaks for cloud adoption to incentivize change so that we can use these resources more positively. Some enterprise clients view cloud computing as a complex, expensive endeavor. In essence, they’re right: they have archaic infrastructures, and it takes a lot of work to move them to the cloud, so the effort involved can remove the incentive to make the switch. To get them over the hump, they need to understand the cost savings, and providing tax incentives, much like subsidizing the purchase of a Tesla, seems like a step in the right direction. That should at least send the right message to companies in the US. Cloud adoption reduces data center space, saves electricity, and helps build businesses faster, which seems like a win-win for large enterprises and the US government.