Cloud Computing Weekly Podcast

Our guest on the podcast this week is Ed Featherston, VP, Principal Architect at Cloud Technology Partners.

We discuss how many of tech's big players from the past are now gone, a trend that makes us look closely at today's big tech companies whose growth is slowing, such as Oracle and IBM. They seem to have lost their disruptive edge and struggle against the new business models they compete with, but experience tells us they are definitely survivors. They are still making money from a legacy business that we live with today, though that revenue is diminishing with the widespread adoption of the cloud. Moving into the cloud, which they have taken steps toward, is an awkward proposition because it means taking sales from their own legacy technology.

IBM is now reinventing how to work within the new model that Microsoft and Amazon have built, playing catch-up on the growth of the cloud and figuring out how to make money on it. With Watson and IoT, they are doing fascinating work in this space that could launch them into a hybrid cloud model with a range of customers, but there is no question they are struggling.

We look at how AWS is aiming to simplify building and deploying apps on its cloud platform with a new service, AWS CodeStar. The service makes it simpler to set up projects by using templates for web applications, web services, and more, and developers can provision projects and resources from coding to testing to deployment. It appears to be yet another service AWS provides that Microsoft and Google don't, which further solidifies its leadership in cloud computing. Amazon is great at targeting clients at all levels, from large enterprises to capturing the hearts and minds of the tech geeks. With AWS CodeStar, they aim to make it easier for developers to build applications on the cloud. 2017 is the year of the cloud land grab: each vendor is trying to get as many people onto its platform as possible, and AWS is trying to convert anyone and everyone.

AWS CodeStar was launched in response to the challenges enterprises face with agile software development processes. The first challenge for a new software project is often a lengthy setup process before anyone can start coding. The ability to pull this work out to the cloud and get up and running quickly is a huge strategic advantage, especially considering enterprises could take years to set up processes like these.

When an enterprise decides to switch to agile development or DevOps, there is a huge initial infrastructure effort involved: getting the IDE set up, the code repository configured with the right security, and the build and deploy systems created. Amazon offering this in a box, as an end-to-end solution including integration with JIRA for bug tracking, will save clients enormous time and headaches. It would not be a surprise to see the other cloud vendors start to imitate CodeStar now that AWS has raised the bar.
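
As a rough illustration of what working with CodeStar projects looks like from the API side, here is a minimal Python sketch that lists projects with boto3. It assumes AWS credentials are configured, the region is only an example, and the account already contains CodeStar projects:

    # Minimal sketch: listing AWS CodeStar projects with boto3.
    # Assumes configured AWS credentials; the region is an example value.
    import boto3

    codestar = boto3.client("codestar", region_name="us-east-1")

    # Walk the projects in this account/region and print basic details.
    for summary in codestar.list_projects().get("projects", []):
        project = codestar.describe_project(id=summary["projectId"])
        print(project["name"], "-", project.get("description", "no description"))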

Amazon has in a way become the new IBM: the 800-pound gorilla that others strive to catch up to. Looking at total capacity, they have close to fourteen times the data capacity of the next five vendors beneath them, and it will be hard to catch up. Other vendors often claim faster growth than AWS, but that is only because they are playing on much smaller fields; AWS is so big there is simply less room left to grow.

Microsoft is now offering Cloud Migration Assessment services, which walk clients through an evaluation of resources they currently use to determine what to move to the cloud and what it will cost to do that. AWS offers a similar tool, and it’s clear that both tools will be used to promote their internal products. It may be a useful tool to determine the cost of migrating to the cloud, but it’s important to remember that everybody’s needs for the cloud are different, so enterprises need to focus on what their particular needs and requirements are.

No technology negates the need for good design and planning, and cloud is no exception. The cost of moving to the cloud depends on how your particular IT is structured, and cross-vendor pricing must be considered as well. Your configuration will depend on your requirements and on which vendors you are piecing together. Keep in mind that these tools tend to underestimate what a migration will cost; they rarely overestimate it.
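
As a back-of-the-envelope illustration of why those estimates need scrutiny, here is a toy Python sketch that weighs one-time migration effort against recurring run costs. Every figure is a made-up placeholder, not vendor pricing:

    # Toy migration-cost sketch: all figures are hypothetical placeholders.
    # The point is to model one-time migration effort alongside recurring
    # run costs rather than compare monthly run rates alone.
    onprem_monthly = 42_000          # current data center run rate (example)
    cloud_monthly = 31_000           # estimated cloud run rate (example)
    migration_one_time = 180_000     # refactoring, testing, cutover (example)

    monthly_savings = onprem_monthly - cloud_monthly
    breakeven_months = migration_one_time / monthly_savings
    print(f"Estimated breakeven: {breakeven_months:.1f} months")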

Direct download: Ed_Featherston.mp3
Category:Technology News -- posted at: 8:43pm EDT

Our guest on the podcast this week is Richard Seroter, Senior Director of Product at Pivotal.

We discuss the transition from the popularity of Service-Oriented Architecture (SOA), starting in the early 2000s, to today's microservices. SOA was API-based, which made it easy to pivot into cloud computing. With SOA we used to treat reuse as a goal, but with microservices it has become much easier to replace everything. With microservices you are often trying to decompose as much as possible to increase speed.

We look at Pivotal and how they help large companies get good at software. They do that with agile methods. Pivotal Labs used to help large companies like eBay, Twitter and Google get ramped up on agile development. There is a lot of money going into disruptive technology today and everybody is at risk of losing their place to someone who does software better. It changes the game for these large enterprises who realize that if they can get their software skills competitive, their advantage of history and supply chain comes back to the forefront. The service experience now starts with software. Pivotal is a trusted advisor to come in and help big companies make these changes.

We also explore updates in Cloud Foundry and Pivotal Cloud Foundry. Last week an update added full Windows support under the covers of Cloud Foundry. Pivotal has been working on not only bringing in the Windows ecosystem, but also supporting more types of workloads, container networking for container-to-container communication, and persistent storage. Pivotal is checking the boxes to move from a traditional PaaS to a cloud-native platform that runs anywhere and brings in more types of workloads. Supporting more kinds of applications matters because most of these companies need to improve their software capabilities but do not necessarily need to get good at infrastructure internally. Offering more types of apps and services in more places helps these enterprises improve their software practice. These are all ways to make developers more productive while also simplifying the ops burden, which is not easy. Learn more about Cloud Foundry at the Cloud Foundry Summit in June.

Microsoft recently acquired a small startup, Deis.  Sometimes acquisitions of smaller companies can be more of a mystery than large ones. Deis powers Kubernetes on Azure, so it looks like this acquisition is most likely a mix of an acqui-hire and a purchase of technology relevant to Microsoft as they improve their Kubernetes capabilities. It seems they are doing everything it takes to get as many interesting workloads on Azure as possible. Large companies like Microsoft, AWS, Google, and IBM can’t move fast enough, so getting the technology they need to differentiate themselves will start to happen more and more. It looks like we should expect to see many more acquisitions to come and further consolidation in the industry. 

Last, we look at cloud security and whether we need to encrypt everything going forward. Many major websites already encrypt by default, and encryption and multi-factor authentication need to be used everywhere. Encrypting everything has historically added latency to how information moves from place to place, and the extra steps annoy people. But we seem to be moving in this direction quickly, and on the cloud the performance penalty is no longer significant. By default, we should encrypt everything from now on, and assume every piece of information is going to be hackable.


It is no longer possible to simply lift-and-shift and assume it will be okay, because legacy apps were designed to sit behind a firewall, where all the inter-app communication was secure only behind the wall. Instead we're moving to a model where every component needs to protect itself, encrypt its own data, secure access into it, and authorize the user on the way in. If you're not doing those things, it will be hard to pick up a legacy app and run it in the cloud somewhere, because its security wasn't built for that. Luckily, cloud services make this process easier and most public clouds offer encrypted databases, though companies are not always ready to handle that. As platform builders, if we can simplify this, we will get farther with developers and have fewer data leaks and bad practices.
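
As one small example of the "encrypt by default" posture, here is a hedged Python sketch that turns on default server-side encryption for an S3 bucket using boto3. The bucket name is hypothetical and the call assumes appropriate permissions:

    # Sketch: enabling default server-side encryption on an S3 bucket.
    # "my-app-data" is a hypothetical bucket name; AES256 uses S3-managed keys.
    import boto3

    s3 = boto3.client("s3")
    s3.put_bucket_encryption(
        Bucket="my-app-data",
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
            ]
        },
    )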

Direct download: Richard_Seroter.mp3
Category:Technology News -- posted at: 10:07pm EDT

Our guest on the podcast this week is James Markarian, CTO at SnapLogic.

Data integration is going through a renaissance at the moment, just like application integration did a few years ago. The primary driver for these changes is the movement to the cloud. Turn back the clock and application integration used to be simple: there were a few ERP systems. As things started moving to the cloud via SaaS, application integration changed quite a bit, and now we're seeing analytic applications move to the cloud as well. The biggest force of gravity is the mass movement to the cloud, and that is also driving data integration. By definition, we're now replicating platforms as much as we are moving into them. Enterprises moving into the cloud need a parallel data and application integration strategy to make use of their data as an asset and to make sure the overall plan will work.

Data is frustrating to deal with. The problems seem easy, but the reality is hard. Normalizing data, merging it so you can get meaningful results, and making sure your SLAs are met in terms of freshness and quality are all easy to describe but frustratingly hard to do. For on-premises systems there has always been a goal of democratizing data, and the challenge has been access. The cloud inherently breaks down that barrier: there is now a public place where people can go to get access to data. That centrality is driving a lot of the innovative capabilities we are seeing emerge.

Going from legacy applications to cloud requires a different approach to application and data integration. One starting point is choosing a cloud platform and trying to pick the right one. This can be the toughest decision a customer faces. Within each cloud platform there are also myriad choices, from which query technology to use, to whether to use Hadoop implementations on Azure, and more. There are so many choices and it’s easy to feel like you’ve made the wrong one.

Working in enterprise IT, there is a shift in the view of how data should be managed, accessed, and manipulated. In the early days it was a developer's job; IT was the team that knew how to access data and application systems. Self-service pushes some of that responsibility out to the edges. It's not about wanting to move away from IT, but about empowering those who have the domain knowledge and will eventually be using the data. How we access these systems is different now, and so is the structure of the data. It used to be row- and column-oriented. Nowadays, with REST and JSON, the goal is to handle these formats natively rather than create new barriers, so you are not manhandling the tool into dealing with data in unnatural ways. That makes developers' lives easier and makes it easier for vendors to add connections to the data.
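
To make the REST/JSON point concrete, here is a small Python example that works with a nested JSON record in its natural shape and flattens it only when a tabular view is needed. The record itself is invented for illustration:

    # Example: a nested JSON record as an API might return it, flattened
    # only at the point where a row-and-column view is actually required.
    import json

    record = json.loads("""
    {
      "customer": {"id": 42, "name": "Acme"},
      "orders": [
        {"sku": "A-100", "qty": 2},
        {"sku": "B-200", "qty": 1}
      ]
    }
    """)

    # Work with the natural shape of the data...
    total_items = sum(order["qty"] for order in record["orders"])

    # ...and flatten into rows only when a downstream tool needs a table.
    rows = [
        {"customer_id": record["customer"]["id"], **order}
        for order in record["orders"]
    ]
    print(total_items, rows)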

Databases used to be relatively simple. These days it’s often special-purpose databases such as in-memory, object-based, hierarchical, and relational databases. In some ways this makes things more complex, but with layers of abstraction it can also make things easier. When we only had relational, you had to make data look relational whether it was meant to or not. Now you do not need to force anything to fit different data sets. 

SnapLogic is beginning to leverage AI and Machine Learning. Their focus is on making computers work for you instead of making you work for computers.

Direct download: James_Markarian.mp3
Category:Technology News -- posted at: 11:23pm EDT

Our guest on the podcast this week is Kelsey Hightower, Developer Advocate at Google Cloud Platform.

We discuss the birth of Kubernetes at Google and how it started as part of Google's own best practices and infrastructure, then was turned into an open-source project and shared externally. It was a brand-new codebase, but built on the lessons of Borg, an internal system at Google for over 10 years, which is why Kubernetes has many of the design traits of a more mature project. It combines Google's past work on containers with the new container ecosystem around Docker so that Kubernetes can work outside of Google in any environment. Anyone can get involved and start contributing to the system. There is no enterprise version of Kubernetes; there is just Kubernetes, and the different vendors and providers add value to it.
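
For a small taste of working with Kubernetes from the outside, here is a minimal sketch using the official Kubernetes Python client to list pods. It assumes the kubernetes package is installed and a kubeconfig points at a cluster:

    # Minimal sketch with the Kubernetes Python client (pip install kubernetes).
    # Assumes a kubeconfig is available, e.g. from a local or cloud cluster.
    from kubernetes import client, config

    config.load_kube_config()          # read ~/.kube/config
    v1 = client.CoreV1Api()

    # Print every pod in the cluster with its namespace and current phase.
    for pod in v1.list_pod_for_all_namespaces().items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)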

We look at the announcements from Google NEXT. There were no Kubernetes surprises because it is developed in the open, so everything you want to know about it is actively tracked on GitHub. Instead, at Google NEXT, they tried to add clarity about why Google is working on specific features like dynamic storage provisioning and custom schedulers. It’s clear Google is evolving into the enterprise cloud space and they are committed to staying ahead of the curve. For Google Cloud, that means opening up some of its core technologies such as TensorFlow and Google Cloud Spanner so that enterprises can begin to consume Google technology.

We look at Amazon’s recent decision to allow customers with Reserved Instance contracts to subdivide some of their Linux and UNIX virtual machine instances and still keep their capacity discounts. This decision was made so that Amazon could keep up with Google on flexible pricing. These types of changes are evidence of a healthy marketplace. A lot of times people try to mimic the spending habits of traditional enterprise IT which require a three-year roadmap, which is the opposite of what cloud should be. Cloud should be a dynamic world where you can scale-up and scale-down, which is why discounts should kick in based on behavior and actual usage. It’s also why Google has per-minute billing. In this way the pricing structure matches the dynamic needs of cloud computing.
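
A quick bit of arithmetic shows why per-minute billing matches dynamic usage better than per-hour billing. The hourly rate below is a made-up example, not a quoted price:

    # Illustration: per-hour vs per-minute billing for a short-lived instance.
    # The hourly rate is a hypothetical example, not any vendor's price.
    hourly_rate = 0.40                   # example $/hour
    runtime_minutes = 95                 # instance actually ran 1h 35m

    # Per-hour billing rounds up to whole hours; per-minute bills actual usage.
    per_hour_cost = ((runtime_minutes + 59) // 60) * hourly_rate
    per_minute_cost = (runtime_minutes / 60) * hourly_rate

    print(f"billed per hour:   ${per_hour_cost:.2f}")
    print(f"billed per minute: ${per_minute_cost:.2f}")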

Last, we discuss how small and medium-sized businesses in India can get subsidies for cloud adoption, while the US wants to tax its usage. This may be an opportunity for the US government to offer tax breaks for cloud adoption to incentivize change, so that these resources are put to better use. Some enterprise clients view cloud computing as a complex, expensive endeavor, and in essence they're right: they have archaic infrastructures, it takes a lot of work to move them to the cloud, and that work can remove the incentive to make the switch. To get them over the hump they need to understand the cost savings, but providing tax incentives, much like the subsidies for buying a Tesla, seems like a step in the right direction and would at least send the right message to companies in the US. Cloud adoption reduces data center space, saves electricity, and helps build businesses faster, which seems like a win-win for large enterprises and the US government.

Direct download: Kelsey_Hightower.mp3
Category:Technology News -- posted at: 6:16pm EDT

Our guest on the podcast this week is Ray Young, VP at Cloud Technology Partners.

We discuss the AWS outage and how it should not come as such a surprise, because cloud provider outages should be expected from time to time. For a lot of companies the outage was a wake-up call, not to move away from the cloud, but to make the right architectural decisions so that these sorts of outages don't take them down. Worrying about outages is not the answer – it's all about management, monitoring, and the ability to have self-healing capabilities. Cloud providers will have outages from time to time, and your enterprise's cloud architecture will make the difference.

We also discuss digital innovation in the cloud. There are many ‘me too’ players jumping into the cloud, following others with the same solutions. Not a lot of people are thinking out of the box and doing new, innovative things. For instance, it seems like AI as an industry is not innovating anymore, just implementing the same technology to solve similar problems.

Traditionally, when IT has talked about cloud adoption, the focus has been on how to shift from a data center mindset to infrastructure as code. But more and more, a real driver for cloud adoption is the need to innovate. The primary way cloud enables innovation is by allowing companies to stay relevant and survive in today's technology world. There is also a shift from brand loyalty based on the merits of the brand to loyalty based on customer experience and engagement, and the cloud is where the innovation and agility around customer engagement are happening. Customers are now making investments in the cloud for cost savings as well.

Large enterprises now see their competition as small, nimble, emerging startups. These startups are able to mature their capabilities through investment in cloud, SaaS, and custom builds on the cloud, and be seen as equals in the marketplace. For large enterprises trying to stay relevant against startups that can move faster than they can, cloud is the only way to level the playing field.

Direct download: Ray_Young.mp3
Category:Technology News -- posted at: 12:22am EDT

Our guest on the podcast this week is Scott Udell, VP of IoT Solutions at Cloud Technology Partners.

We discuss container orchestration technologies like Kubernetes, and the healthcare industry's complex relationship with cloud computing.

We look at the reasons to use Kubernetes as a container orchestration tool. Kubernetes represents about 80% of the orchestration tooling in use. Its market position is known, the skill sets exist for it, it can scale, and it provides a one-stop shop for doing something effective with containers. Overall, companies are not doing a lot with containers today; it is still a small part of the cloud, but it will continue to expand rapidly.

We also look at the healthcare industry's complex relationship with the cloud. Healthcare is behind the curve in how it leverages technology, and the amount of automation that could occur and the good that could be done with technology in the industry is immense. Healthcare organizations could move a lot faster by moving to the cloud.


Direct download: Scott_Udell.mp3
Category:Technology News -- posted at: 11:10pm EDT

Our guest on the podcast this week is Kim Weins of RightScale.

We discuss RightScale's State of the Cloud report analyzing trends in the cloud. RightScale helps customers adopt the cloud by helping them with cloud management and optimization.

This is the sixth year of the report, so we can start to see trends over time, and there were a few interesting takeaways this year. In the report, RightScale asks enterprises two big questions. The first is about cloud strategy: what their intention is for cloud – to use private, public, or combinations of those. The second asks what they actually use today for private or public clouds. From a strategy point of view, people are still focused on multi-cloud, with a special focus on hybrid cloud, and there was a shift away from private-only strategies: fewer people said they plan to use only private cloud or multiple private clouds. On adoption, there was a slight drop in people already using private cloud, from 77% last year down to 72% this year. This may indicate that companies who had tried to build their own private cloud with OpenStack are now backing off from that strategy.

The survey found that the average company leverages about four different cloud vendors. This is the result of a combination of acquiring companies that use different cloud providers and deliberate strategies to leverage multiple providers. RightScale asked people which public and private clouds they are running applications on (focused on IaaS and PaaS, not SaaS) and which ones they are experimenting with. They found that people using at least one public cloud are running applications in 1.8 public clouds on average and are typically experimenting with another 1.8. Even if they are not using one of the big cloud vendors, they are often at least experimenting with it. Those adopting private cloud report about 2.3 different private clouds.

The top challenges this year were a three-way tie between security, spend, and skills (access to skilled resources). Last year skills was highest, and it has dropped a bit this year. The share of people in IT who are concerned about security has been declining each year: among enterprises in 2017, just over 35% rated cloud security as a significant challenge, while six years ago that number was about 10% higher. We have now reached a tipping point where people realize that, when done right, cloud can be as secure if not more secure than a traditional data center.

As people adopt public cloud, costs have been increasing and companies are starting to realize they are inefficient with their spend. On average, companies believe they are wasting 30% of their cloud spend, while RightScale has found that the real figure is typically 30-45% or more. The survey also found that the more mature a company's cloud adoption is, the more important spend becomes.

This year Docker moved into first place in the list of tools RightScale tracks; while usage of most tools increased, Chef and Puppet declined. The survey specifically focuses on configuration management tools and container tools. Docker usage moved from 13% in 2015 to 27% in 2016 to 35% this year, while Chef and Puppet each dropped about 4% this year. The other big increase this year was Kubernetes, which doubled from 7% last year to 14% this year and seems to be in the lead among scheduling and orchestration tools.

RightScale also noticed an early trend of people starting to use Docker to take advantage of the temporary instances cloud providers offer, such as AWS Spot instances or Google Preemptible VMs. These can mean 70-90% savings versus on-demand pricing, but workloads need to be very portable because the temporary instances can be taken away, so pairing Docker with a containers-as-a-service offering can help capture those savings.
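
The arithmetic behind that 70-90% figure is simple; the prices below are hypothetical examples rather than current AWS or Google rates:

    # Rough sketch of the spot/preemptible saving mentioned above.
    # Prices are hypothetical examples, not actual AWS or Google rates.
    on_demand_hourly = 0.20     # example on-demand price per hour
    spot_hourly = 0.04          # example spot/preemptible price per hour
    hours_per_month = 730

    savings = 1 - (spot_hourly / on_demand_hourly)
    print(f"~{savings:.0%} saving, "
          f"${(on_demand_hourly - spot_hourly) * hours_per_month:.0f}/month per instance")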


We look at predictions for next year’s State of the Cloud report. Private cloud will likely continue to be under pressure, though we may see a slight uptick with VMware on AWS. It is likely that Docker will continue to grow and that the cost of the cloud will continue to be an ongoing challenge for enterprises.

Direct download: Kim_Weins.mp3
Category:Technology News -- posted at: 8:48pm EDT

Our guest on the podcast this week is Joey Jablonski, VP, Principal Architect at Cloud Technology Partners.

We discuss serverless computing and what it means for enterprises. We compare different serverless computing platforms, look at pricing models, and discuss how to determine which applications are right for serverless computing.

Serverless computing is a manifestation of the idea that we should focus on our applications and focus on the code that makes them efficient in providing high-levels of functionality. It brings enterprises away from care, feeding, and operations that are low-calorie work. At this point all major cloud vendors have announced some implementation of serverless computing that allows developers to take a section of code, deploy it on their platform, and have it execute for either very short periods of time, or very long periods of time. In a way, developers do not need to worry about operating systems, connectivity, installing patches, upgrades, or any of the operational pieces that go with having a virtual machine or a physical server.
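
For a sense of how small that "section of code" can be, here is a minimal function in the AWS Lambda Python style; the event field is a hypothetical example payload:

    # Minimal sketch of a serverless function in the AWS Lambda Python style.
    # The platform invokes handler(event, context); there is no server to manage.
    # The "name" field in the event is a hypothetical example payload.
    def handler(event, context):
        name = event.get("name", "world")
        return {
            "statusCode": 200,
            "body": f"Hello, {name}!",
        }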

Serverless computing seems like something that should have been built into cloud computing from the beginning. With IaaS, we try to mimic what's in the data center, and virtual machines require most of the same upkeep as a regular server. Serverless computing abstracts away those details and lets us focus on the brilliance of development instead.

It's likely that if serverless computing had emerged in 2006, it would not have been adopted easily. In the early days of cloud computing, enterprises were trying to break data center habits they had built up, such as change management, control, and approvals that took a long time. Early cloud services focused on getting out of the data center and allowed people to start the transition. If serverless technology had existed at that time, people might not have used it, because transitioning their enterprises and skill sets all at once would have been too big a leap. We're now at the point in the adoption curve where people have embraced the cloud, are comfortable not having physical access to their applications and systems, and are okay with not being able to touch the servers. They are realizing how much time goes into patching and monitoring systems, and that this is low-calorie work that doesn't contribute to building features and capabilities. Having shed those data center habits, they now want to shed the operating system and server habits too, to make their operations more efficient. This is where serverless computing comes in.

To be clear, the servers are still there, we are just abstracting ourselves from having to deal with the operations and details. This allows us to focus on building the best apps we can without needing to worry about underlying infrastructures.

There are worries that serverless computing may further lock users into one vendor, but so far that doesn't seem to be the case when looking at the cloud providers. Many of the serverless capabilities they provide support a standard set of languages (Java, JavaScript, PHP) that are common across all major cloud providers, so an application built in one of these highly portable languages can move with minimal changes. Also, there is always value in using features that are specific to a cloud provider – while that gets you closer to what people consider vendor lock-in, it comes with a lot of performance, security, and functionality advantages. It's about finding the right balance: using languages that are portable across vendors in these serverless environments, while also picking the capabilities of each platform that give you a unique advantage when building applications.

Security in serverless computing takes a certain level of engagement with the platform vendor to fully understand. Older models of building in network layers of protection don’t apply as much in the serverless world. With serverless computing, we look at the cloud platform (Azure, Google, AWS) to provide network-level security. It shifts the responsibility to enterprises to focus deeper on application-level security (log monitoring, databases, data stores) and to look for anomalous behaviors at the application-level.

In terms of which applications are best for serverless computing, consider it for net-new systems. Traditional lift-and-shift environments would typically go to virtual CPUs and storage devices that sit on the cloud. There are a variety of interesting startups looking at ways to automatically refactor code to move it from an OS-based environment to a serverless environment. As applications and companies continue to migrate to the cloud at an accelerated rate, we will see more of these migration tools that make the transition simpler and enable cloud adopters to use more innovative services rather than moving applications as they are.

We look at the tradeoffs between Amazon Lambda and Azure Functions.

  • Amazon Lambda: For maturity of the service, AWS wins. Lambda has been out for over two years now, and has continued to innovate and release new features supporting new languages. They have also built up a skills-base of people who are comfortable deploying applications with Lambda. AWS has done a great job of getting services out the door so people can use them quickly and provide feedback. AWS takes a simplified approach to serverless pricing.
  • Azure Functions: This is also a strong platform, with a broader base of supported languages and full support of the Microsoft ecosystem. For environments that are Microsoft-biased today (i.e. C# or .NET) there are many reasons to go to the Azure platform because of its tight integration with those languages and development frameworks. Azure pricing is more complex, with different pre-buy and discount models.

In addition, Google is just beginning to play in the serverless computing space with its alpha version of Google Cloud Functions.

It's important for enterprises not to do too many transformations at the same time. Take steps one at a time that are comfortable for the organization. First, move applications relatively intact to the cloud for the ability to deploy infrastructure quickly and get elasticity and a better pricing model. Then, iterate by moving to a serverless environment to improve operations dramatically. This allows the organization to keep up with the technology changes as it matures.

Direct download: Joey_Jablonksi.mp3
Category:Technology News -- posted at: 4:00pm EDT

Our guest on the podcast this week is Randy Bias, VP, Technology & Strategy at Juniper Networks.

We discuss Juniper Networks and their acquisitions, Oracle's vendor lock-in and new pricing changes, and why Kubernetes has pulled ahead of Docker in the container world. Juniper Networks is expanding its cloud capability with two acquisitions: Contrail, a software-defined networking play, and AppFormix, cloud monitoring software.

For Randy, containers are the most exciting part of cloud computing right now. The OpenStack movement is slowing down and there is fatigue, because people are starting to realize that public cloud is going to win. The private infrastructures that still exist (such as VMware) come with too much complexity, from set-up to organizing and managing. People are deciding to leapfrog past those and go straight to containers. They want to pair containers with things like Contrail because the combination allows for a robust infrastructure that can run on both private and public cloud. A recent Riot Games blog post about streaming video games shows how they combine container orchestration systems with Contrail to give themselves a true hybrid cloud solution at the container level, so developers do not need to worry about what infrastructure they are running on.

In the news recently, Oracle doubled its license fees to run in AWS. As enterprises migrate to the cloud, they have a lot of Oracle software currently running in-house and are looking to bring their licenses along and run Oracle in the cloud, so now they're being asked to pay more for that migration. This could accelerate companies leaving Oracle because of the higher prices. People are fed up with proprietary software, licensing, and vendor lock-in. Ten years ago there weren't many alternatives to Oracle for relational databases; now there are plenty, from Aurora to RDS and Redshift on Amazon. The problem is that a typical enterprise built a lot of procedures and triggers into the applications that run on the Oracle database ten years ago. Now they either have to ditch or rewrite all of that, which can be risky, or pay more for Oracle licenses and stay put.

Oracle does not have a good cloud play yet, though they are trying, and they still need to maintain the legacy databases for enterprises as the transition occurs. Perhaps if Oracle remained reasonable on pricing they would have more of a chance of surviving, but the recent news seems to make it easier for customers to decide to leave. They may need to start skating further ahead of the puck soon if they want to weather the cloud disruption.

We also discuss Docker and why Kubernetes has taken the lead in container software in recent years. Docker took off because it was the “Easy Button” for application developers who did not want to learn Chef or Puppet. Then Docker tried to reach infrastructure teams who did not know what containers were, adding complexity to make containers look like next-gen virtual machines. That is not how app developers viewed them, and now Kubernetes has become the preferred choice. Most OpenStack startups have bet on Kubernetes. Docker has built a big platform that has not found its killer app yet and is no longer able to take advantage of the dominance it once had. Docker acquired a lot of companies and linked together a lot of different technologies along the way, always adding layers of complexity. Kubernetes is in front because it has simplified everything, and that is what developers want right now.

For an enterprise who is moving to Docker, Randy provides tips and warnings.

  1. Work with a container vendor that is thinking about the future and making it easy on their app developers.
  2. View containers more as application-centric than infrastructure-centric.
  3. Containers by themselves are not going to solve the problem; they need to work with a set of other related services.

It won’t be easy, and be wary of anyone who tells you all the problems with containers are completely solved. Use DevOps and have an application-centric model, so you can focus on velocity.

News Covered

The Register: Oracle effectively doubles licence fees to run its stuff in AWS

Direct download: Randy_Bias_-_Juniper.mp3
Category:Technology News -- posted at: 9:33am EDT

Our guest on the podcast this week is Friederike Schüür, Data Science, Research, and Technical Advisor at Fast Forward Labs.

We discuss how Fast Forward Labs applies the machine learning algorithms of academia to the business world to impact industries. They help clients leverage what they've uncovered in their research to build prototypes and demonstrate the potential of new algorithms. Companies are starting to use neural networks for things like image classification, natural language, text summarization, and more. Though it is not a new technique, it is new to the business world and is now available for companies to use. One example of a neural network use case is taking a long article and automatically cutting it down to the five sentences that capture its essence. Artificial intelligence has been around for more than 30 years, but what has changed is that it is now much easier to store data, and deep learning requires a large amount of data to be effective. Cloud infrastructure also makes machine learning practical today, because it opens up the use of these algorithms to large companies and even small startups. AI used to require more horsepower and physical space, and cost more to implement, than it does today.
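
As a toy version of that summarization example, here is a short Python sketch that scores sentences by word frequency and keeps the top five. It is a simplification for illustration, not Fast Forward Labs' actual neural-network technique:

    # Toy extractive summarizer: score sentences by word frequency and keep
    # the five highest-scoring ones, in their original order.
    import re
    from collections import Counter

    def summarize(text, n_sentences=5):
        sentences = re.split(r"(?<=[.!?])\s+", text.strip())
        words = re.findall(r"[a-z']+", text.lower())
        freq = Counter(words)
        scored = sorted(
            range(len(sentences)),
            key=lambda i: sum(
                freq[w] for w in re.findall(r"[a-z']+", sentences[i].lower())
            ),
            reverse=True,
        )
        keep = sorted(scored[:n_sentences])   # preserve original sentence order
        return " ".join(sentences[i] for i in keep)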

Fast Forward Labs clients often struggle to identify the right problem for machine learning. For example, there is a lot of hype today around conversational agents, or chat bots, replacing human agents with AI. But early entrants have found that these bots do not get used frequently by customers, which turns out to be a user experience problem and not something that can be fixed with machine learning.

The real promise of machine learning is that it can help us with repetitive tasks inside the company. Look in your organization for places where a similar decision or action needs to be made over and over and that is where machine learning could be used.

We discuss the different options for tapping into machine learning algorithms, from AWS to Google TensorFlow, and the open-source tools to combine with them. The most important thing when getting started is to define what the business is trying to achieve. Next, determine exactly what data is available, and only then develop a machine learning strategy. Start simple, and add complexity only when necessary.
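
In the spirit of starting simple, here is a hedged scikit-learn sketch of a bag-of-words baseline one might try before reaching for deep learning; the tickets and labels are placeholder data:

    # "Start simple" sketch: a bag-of-words baseline with scikit-learn before
    # anything deep. The tickets and labels below are placeholder examples.
    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    tickets = ["cannot reset my password", "invoice charged twice",
               "password link expired", "refund for duplicate charge"]
    labels = ["account", "billing", "account", "billing"]

    # TF-IDF features feeding a simple linear classifier.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(tickets, labels)
    print(model.predict(["my invoice shows a duplicate charge"]))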


Direct download: Friederike_Schuur.mp3
Category:Technology News -- posted at: 3:01pm EDT