Dec 19, 2018

Dataiku raises $101 million for its collaborative data science platform

Dataiku wants to turn buzzwords into an actual service. The company has been focused on data tools for years, since before everybody started talking about big data, data science and machine learning.

And the company just raised $101 million in a round led by Iconiq Capital, with Alven Capital, Battery Ventures, Dawn Capital and FirstMark Capital also participating.

If you’re generating a lot of data, Dataiku helps you find meaning in your data sets. First, you import your data by connecting Dataiku to your storage system. The platform supports dozens of database formats and sources — Hadoop, NoSQL, images, you name it.

You can then use Dataiku to visualize your data, clean your data set, run some algorithms on your data in order to build a machine learning model, deploy it and more. Dataiku has a visual coding tool, or you can use your own code.
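For a flavor of what that workflow looks like in practice, here is a minimal sketch using Dataiku’s Python API from inside a DSS project; the dataset names and cleaning steps are hypothetical stand-ins for your own data.

import dataiku
import pandas as pd

# Read a dataset that was imported through a DSS connection
df = dataiku.Dataset("raw_events").get_dataframe()

# Clean it with ordinary pandas before modeling
df = df.dropna(subset=["user_id"])
df["signup_date"] = pd.to_datetime(df["signup_date"])

# Write the cleaned frame back so visual recipes and models can pick it up
output = dataiku.Dataset("events_cleaned")
output.write_with_schema(df)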

But Dataiku isn’t just a tool for data scientists. Even if you’re a business analyst, you can visualize and extract data from Dataiku directly. And because of its software-as-a-service approach, your entire team of data scientists and data analysts can collaborate on Dataiku.

Clients use it to track churn, detect fraud, forecast demand, optimize lifetime values and more. Customers include General Electric, Sephora, Unilever, KUKA, FOX and BNP Paribas.

With today’s funding round, the company plans to double its staff. It currently employs 200 people in New York, Paris and London, and plans to open offices in Singapore and Sydney as well.

Dec 13, 2018

They scaled YouTube — now they’ll shard everyone with PlanetScale

When the former CTOs of YouTube, Facebook and Dropbox seed fund a database startup, you know there’s something special going on under the hood. Jiten Vaidya and Sugu Sougoumarane saved YouTube from a scalability nightmare by inventing and open-sourcing Vitess, a brilliant relational data storage system. But in the decade since working there, the pair have been inundated with requests from tech companies desperate for help building the operational scaffolding needed to actually integrate Vitess.

So today the pair are revealing their new startup PlanetScale that makes it easy to build multi-cloud databases that handle enormous amounts of information without locking customers into Amazon, Google or Microsoft’s infrastructure. Battle-tested at YouTube, the technology could allow startups to fret less about their backend and focus more on their unique value proposition. “Now they don’t have to reinvent the wheel,” Vaidya tells me. “A lot of companies facing this scaling problem end up solving it badly in-house, and now there’s a way to solve that problem by using us to help.”

PlanetScale quietly raised a $3 million seed round in April, led by SignalFire and joined by a who’s who of engineering luminaries. They include YouTube co-founder and CTO Steve Chen, Quora CEO and former Facebook CTO Adam D’Angelo, former Dropbox CTO Aditya Agarwal, PayPal and Affirm co-founder Max Levchin, MuleSoft co-founder and CTO Ross Mason, Google director of engineering Parisa Tabriz and Facebook’s first female engineer and South Park Commons founder Ruchi Sanghvi. If anyone could foresee the need for Vitess implementation services, it’s these leaders, who’ve dealt with scaling headaches at tech’s top companies.

But how can a scrappy startup challenge the tech juggernauts for cloud supremacy? First, by actually working with them. The PlanetScale beta that’s now launching lets companies spin up Vitess clusters on PlanetScale’s own database-as-a-service, on their own infrastructure through a licensing deal, or on AWS, with Google Cloud and Microsoft Azure support coming shortly. Once these integrations with the tech giants are established, PlanetScale clients can use it as an interface for a multi-cloud setup, keeping, say, the master copies of their data on AWS US-West with replicas on Google Cloud in Ireland and elsewhere. That protects companies from becoming dependent on one provider and then getting stuck with price hikes or service problems.
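Because vtgate, Vitess’s query router, speaks the MySQL wire protocol, an application talks to a PlanetScale-style cluster the way it would talk to any MySQL server. A minimal sketch with the PyMySQL client follows; the host, credentials and schema are hypothetical placeholders for whatever the service provisions.

import pymysql

# vtgate speaks the MySQL protocol, so a stock client works unchanged.
conn = pymysql.connect(
    host="vtgate.example-cluster.internal",  # hypothetical endpoint
    port=3306,
    user="app",
    password="secret",
    database="commerce",
)
with conn.cursor() as cur:
    # Vitess routes the query to the right shard(s) behind the scenes
    cur.execute("SELECT id, email FROM users WHERE id = %s", (42,))
    print(cur.fetchone())
conn.close()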

PlanetScale also promises to uphold the principles that undergirded Vitess. “It’s our value that we will keep everything in the query path completely open source so none of our customers ever have to worry about lock-in,” Vaidya says.

PlanetScale co-founders (from left): Jiten Vaidya and Sugu Sougoumarane

Battle-tested, YouTube-approved

He and Sougoumarane met 25 years ago while at the Indian Institute of Technology Bombay. Back in 1993 they worked together at pioneering database company Informix before it flamed out. Sougoumarane was eventually hired by Elon Musk as an early engineer for X.com before it became PayPal, and then left for YouTube. Vaidya was working at Google, and the pair were reunited when it bought YouTube and Sougoumarane pulled him onto the team.

“YouTube was growing really quickly and the relational database they were using with MySQL was sort of falling apart at the seams,” Vaidya recalls. Adding more CPU and memory to the database infra wasn’t cutting it, so the team created Vitess. The horizontal-scaling sharding middleware for MySQL let users segment their database to reduce memory usage while still being able to run operations rapidly. YouTube has smoothly ridden that infrastructure to 1.8 billion users ever since.
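The core idea behind that sharding layer can be shown in a few lines: hash each row’s sharding key and route it to a shard deterministically. This is a toy illustration only, not Vitess’s actual vindex machinery.

import hashlib

SHARDS = ["shard-00", "shard-01", "shard-02", "shard-03"]

def shard_for(user_id: int) -> str:
    # Hash the sharding key and map the first 8 bytes to a shard
    digest = hashlib.md5(str(user_id).encode()).digest()
    keyspace_id = int.from_bytes(digest[:8], "big")
    return SHARDS[keyspace_id % len(SHARDS)]

# The same key always lands on the same shard, so reads and writes
# for a given user touch only one slice of the data
print(shard_for(42))
print(shard_for(42))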

“Sugu and Mike Solomon invented Vitess and open-sourced it right from the beginning in 2010, because they knew the scaling problem wasn’t just YouTube’s, and that they’d be at other companies five or 10 years later trying to solve the same problem,” Vaidya explains. That proved true, and now top apps like Square and HubSpot run entirely on Vitess, with Slack now 30 percent migrated.

Vaidya left YouTube in 2012 and became the lead engineer at Endorse, which got acquired by Dropbox, where he worked for four years. But in the meantime, the engineering community strayed toward MongoDB-style non-relational databases, which Vaidya considers inferior. He sees indexing issues and says that if the system hiccups during an operation, data can become inconsistent — a big problem for banking and commerce apps. “We think horizontally scaled relational databases are more elegant and are something enterprises really need.”

Database legends reunite

Fed up with the engineering heresy, a year ago Vaidya committed to creating PlanetScale. It’s composed of four core offerings: professional training in Vitess, on-demand support for open-source Vitess users, Vitess database-as-a-service on PlanetScale’s servers and software licensing for clients that want to run Vitess on premises or through other cloud providers. It lets companies re-shard their databases on the fly to relocate user data to comply with regulations like GDPR, safely migrate from other systems without major codebase changes, make on-demand changes and run on Kubernetes.

The PlanetScale team

PlanetScale’s customers now include Indonesian e-commerce giant Bukalapak, and it’s helping Booking.com, GitHub and New Relic migrate to open-source Vitess. Growth is suddenly ramping up due to inbound inquiries. Last month, around the time Square Cash became the No. 1 app, its engineering team published a blog post extolling the virtues of Vitess. Now everyone’s seeking help with Vitess sharding, and PlanetScale is waiting with open arms. “Jiten and Sugu are legends and know firsthand what companies require to be successful in this booming data landscape,” says Ilya Kirnos, founding partner and CTO of SignalFire.

The big cloud providers are trying to adapt to the relational database trend, with Google’s Cloud Spanner and Cloud SQL, and Amazon’s RDS and Aurora. Their huge networks and marketing war chests could pose a threat. But Vaidya insists that while it might be easy to get data into these systems, it can be a pain to get it out. PlanetScale is designed to give customers optionality through its multi-cloud functionality so their eggs aren’t all in one basket.

Finding product market fit is tough enough. Trying to suddenly scale a popular app while also dealing with all the other challenges of growing a company can drive founders crazy. But if it’s good enough for YouTube, startups can trust PlanetScale to make databases one less thing they have to worry about.

Dec 13, 2018

New tool uses AI to roll back problematic continuous delivery builds automatically

As companies shift to CI/CD (continuous integration/continuous delivery), they face a new challenge: monitoring and fixing problems in builds that have already been deployed. How do you deal with an issue after moving on to the next delivery milestone? Harness, the startup launched last year by AppDynamics founder Jyoti Bansal, wants to fix that with a new tool called 24×7 Service Guard.

The new tool is designed to help companies working with a continuous delivery process by monitoring all of the builds, regardless of when they were launched. What’s more, the company claims that using AI and machine learning, it can automatically roll back a problematic build to the last one that worked, freeing developers and operations to keep working without worry.

The company launched last year with a tool called Continuous Verification to verify that a continuous delivery build got deployed. With today’s announcement, Bansal says the company is taking this to another level to help understand what happens after you deploy.

The tool watches every build, even days after deployment, drawing on data from tools like AppDynamics, New Relic, Elastic and Splunk, then using AI and machine learning to identify problems and roll the deployment back to a working state without human intervention. What’s more, your team gets a unified view of the performance and quality of every build across all of your monitoring and logging tools.
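Harness hasn’t published the internals of Service Guard, but the general shape of the idea, comparing a new build’s metrics against a learned baseline and rolling back on a sustained anomaly, can be sketched roughly like this (every name and threshold below is hypothetical):

from statistics import mean, stdev

def is_anomalous(baseline, current, z_threshold=3.0):
    # Flag the new build if its average error rate sits far outside
    # the distribution observed for the last known-good build
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(current) - mu) > z_threshold * max(sigma, 1e-9)

def service_guard(baseline_metrics, new_metrics, rollback):
    # Conceptual sketch only, not Harness's actual implementation
    if is_anomalous(baseline_metrics, new_metrics):
        rollback()  # dial the deployment back to the last working build

# Hypothetical usage, with error rates pulled from a monitoring tool:
# service_guard(v41_error_rates, v42_error_rates, rollback_to_v41)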

“People are doing Continuous Delivery and struggling with it. They are also using these AI Ops kinds of products, which are watching things in production, and trying to figure out what’s wrong. What we are doing is we’re bringing the two together and ensuring nothing goes wrong,” Bansal explained.

24×7 Service Guard Console. Screenshot: Harness

He says he brought this product to market because he saw enterprise companies struggling with CI/CD. The early messaging that you should move fast and break things, he said, really doesn’t work in enterprise settings. They need tooling that ensures critical applications keep running even with continuous builds (however you define that). “How do you enable developers so that they can move fast and make sure the business doesn’t get impacted? I feel the industry was underserved by this [earlier] message,” he said.

While it’s hard for any product to absolutely guarantee uptime, this one provides tooling for companies that see the value of CI/CD but want a way to keep their applications up and running, so they aren’t constantly on a deploy/repair treadmill. If it works as described, it could help advance CI/CD, especially for large companies that need to learn to move faster and want assurances that when things break, they can be fixed in an automated fashion.

Dec 11, 2018

The Cloud Native Computing Foundation adds etcd to its open-source stable

The Cloud Native Computing Foundation (CNCF), the open-source home of projects like Kubernetes and Vitess, today announced that its Technical Oversight Committee has voted to bring a new project on board. That project is etcd, the distributed key-value store that was first developed by CoreOS (now owned by Red Hat, which in turn will soon be owned by IBM). Red Hat has now contributed this project to the CNCF.

Etcd, which is written in Go, is already a major component of many Kubernetes deployments, where it functions as a source of truth for coordinating clusters and managing the state of the system. Other open-source projects that use etcd include Cloud Foundry, and companies that use it in production include Alibaba, ING, Pinterest, Uber, The New York Times and Nordstrom.
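Etcd exposes a simple, strongly consistent key-value API, which is what lets it work as that source of truth. A quick taste, using the third-party python-etcd3 client against a local etcd listening on its default port:

import etcd3

# Connect to a local etcd member on the default client port
client = etcd3.client(host="localhost", port=2379)

# Store and read back a piece of cluster state
client.put("/config/replicas", "3")
value, _metadata = client.get("/config/replicas")
print(value)  # b'3'

# Consistent range reads over a prefix are part of what makes etcd
# a convenient source of truth for cluster coordination
for val, meta in client.get_prefix("/config/"):
    print(meta.key, val)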

“Kubernetes and many other projects like Cloud Foundry depend on etcd for reliable data storage. We’re excited to have etcd join CNCF as an incubation project and look forward to cultivating its community by improving its technical documentation, governance and more,” said Chris Aniszczyk, COO of CNCF, in today’s announcement. “Etcd is a fantastic addition to our community of projects.”

Today, etcd has well over 450 contributors and nine maintainers from eight different companies. The fact that it ended up at the CNCF is only logical, given that the foundation is also the host of Kubernetes. With this, the CNCF now plays host to 17 projects that fall under its “incubated technologies” umbrella. In addition to etcd, these include OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Jaeger, Notary, TUF, Vitess, NATS, Helm, Rook and Harbor. Kubernetes, Prometheus and Envoy have already graduated from this incubation stage.

That’s a lot of projects for one foundation to manage, but the CNCF community is also extraordinarily large. This week alone about 8,000 developers are converging on Seattle for KubeCon/CloudNativeCon, the organization’s biggest event yet, to talk all things containers. It surely helps that the CNCF has managed to bring competitors like AWS, Microsoft, Google, IBM and Oracle under a single roof to collaboratively work on building these new technologies. There is a risk of losing focus here, though, something that happened to the OpenStack project when it went through a similar growth and hype phase. It’ll be interesting to see how the CNCF will manage this as it brings on more projects (with Istio, the increasingly popular service mesh, being a likely candidate for coming over to the CNCF as well).

Dec 08, 2018

Why you need a supercomputer to build a house

When the hell did building a house become so complicated?

Don’t let the folks on HGTV fool you. The process of building a home nowadays is incredibly painful. Just applying for the necessary permits can be a soul-crushing undertaking that’ll have you running around the city, filling out useless forms, and waiting in motionless lines under fluorescent lights at City Hall wondering whether you should have just moved back in with your parents.

Consider this an ongoing discussion about Urban Tech, its intersection with regulation, issues of public service, and other complexities that people have full PhDs on. I’m just a bitter, born-and-bred New Yorker trying to figure out why I’ve been stuck in between subway stops for the last 15 minutes, so please reach out with your take on any of these thoughts: Arman.Tabatabai@techcrunch.com.

And to actually get approval for those permits, your future home will have to satisfy a combinatorial tangle of complex and conflicting federal, state and city building codes, separate sets of fire and energy requirements, and quasi-legal construction standards set by various independent agencies.

It wasn’t always this hard – remember when you’d hear people say, “my grandparents built this house with their bare hands”? These proliferating rules have been among the main causes of the rapidly rising cost of housing in America and other developed nations. The good news is that a new generation of startups is identifying and simplifying these thickets of rules, and the future of housing may be determined as much by machine learning as by woodworking.

When directions become deterrents

Photo by Bill Oxford via Getty Images

Cities once solely created the building codes that dictate the requirements for almost every aspect of a building’s design, and they structured those guidelines based on local terrain, climates and risks. Over time, townships, states, federally-recognized organizations and independent groups that sprouted from the insurance industry further created their own “model” building codes.

The complexity starts here. The federal codes and independent agency standards are optional for states, who have their own codes which are optional for cities, who have their own codes that are often inconsistent with the state’s and are optional for individual townships. Thus, local building codes are these ever-changing and constantly-swelling mutant books made up of whichever aspects of these different codes local governments choose to mix together. For instance, New York City’s building code is made up of five sections, 76 chapters and 35 appendices, alongside a separate set of 67 updates (The 2014 edition is available as a book for $155, and it makes a great gift for someone you never want to talk to again).

In short: what a shit show.

Because of the hyper-localized and overlapping nature of building codes, a home in one location can be subject to a completely different set of requirements than one elsewhere. So it’s really freaking difficult to even understand what you’re allowed to build, the conditions you need to satisfy, and how to best meet those conditions.

There are certain levels of complexity in housing codes that are hard to avoid. The structural integrity of a home depends on everything from walls to erosion and wind flow. There are countless types of material and technology used in buildings, all of which are constantly evolving.

Thus, each thousand-page codebook from the various federal, state, city, township and independent agencies – all dictating interconnecting, location- and structure-dependent requirements – leads to an incredibly expansive decision tree that requires an endless set of simulations to fully understand all the options you have to reach compliance, and their respective cost-effectiveness and efficiency.
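Some back-of-the-envelope arithmetic shows why. Even a handful of independent design choices multiplies into a search space no human wants to check by hand (the option counts below are invented for illustration):

from math import prod

# Hypothetical number of code-compliant options per design dimension
options = {
    "wall_assembly": 12,
    "window_type": 8,
    "insulation": 10,
    "hvac_system": 15,
    "roof_material": 6,
}

# Every combination is a distinct design that may need checking
# against the overlapping codes
combinations = prod(options.values())
print(f"{combinations:,} candidate designs")  # 86,400 from just 5 choices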

So homebuilders are often forced to turn to costly consultants or settle on designs that satisfy code but aren’t cost-efficient. And if construction issues cause you to fall short of the outcomes you expected, you could face hefty fines, delays or gigantic cost overruns from redesigns and rebuilds. All these costs flow through the lifecycle of a building, ultimately impacting affordability and access for homeowners and renters.

Startups are helping people crack the code

Photo by Caiaimage/Rafal Rodzoch via Getty Images

Strap on your hard hat – there may be hope for your dream home after all.

The friction, inefficiencies, and pure agony caused by our increasingly convoluted building codes have given rise to a growing set of companies that are helping people make sense of the home-building process by incorporating regulations directly into their software.

Using machine learning, their platforms run advanced scenario-analysis around interweaving building codes and interdependent structural variables, allowing users to create compliant designs and make regulation-informed decisions without ever having to encounter the regulations themselves.

For example, the prefab housing startup Cover is helping people figure out what kind of backyard homes they can design and build on their properties based on local zoning and permitting regulations.

Some startups are trying to provide similar services to developers of larger scale buildings as well. Just this past week, I covered the seed round for a startup called Cove.Tool, which analyzes local building energy codes – based on location and project-level characteristics specified by the developer – and spits out the most cost-effective and energy-efficient resource mix that can be built to hit local energy requirements.

And startups aren’t just simplifying the regulatory pains of the housing process through building codes. Envelope is helping developers make sense of our equally tortuous zoning codes, while Cover and companies like Camino are helping steer home and business-owners through arduous and analog permitting processes.

Look, I’m not saying codes are bad. In fact, I think building codes are good and necessary – no one wants to live in a home that might cave in on itself the next time it snows. But I still can’t help asking: why the hell does it take AI to figure out how to build a house? Why do we have building codes that take a supercomputer to figure out?

Ultimately, it would probably help to have more standardized building codes that we actually clean up from time to time. More regional standardization would greatly reduce the number of conditional branches. And if there were one set of accepted overarching codes that still set precise requirements for all components of a building, there would be only one path of regulations to follow, greatly reducing the knowledge and analysis needed to build a home efficiently.

But housing’s inherent ties to geography make standardization unlikely. Each region has different land conditions, climates, priorities and political motivations that cause governments to want their own set of rules.

Instead, governments seem to be fine with sidestepping the issues caused by hyper-regional building codes and leaving it up to startups to help people wade through the ridiculousness that paves the home-building process, in the same way Concur aids employees with infuriating corporate expensing policies.

For now, we can count on startups that are unlocking value and making housing more accessible, simpler and cheaper just by making the rules easier to understand. And maybe one day my grandkids can tell their friends how their grandpa built his house with his own supercomputer.


Dec 06, 2018

Contentful raises $33.5M for its headless CMS platform

Contentful, a Berlin- and San Francisco-based startup that provides content management infrastructure for companies like Spotify, Nike, Lyft and others, today announced that it has raised a $33.5 million Series D funding round led by Sapphire Ventures, with participation from OMERS Ventures and Salesforce Ventures, as well as existing investors General Catalyst, Benchmark, Balderton Capital and Hercules. In total, the company has now raised $78.3 million.

It’s been less than a year since the company raised its Series C round and, as Contentful co-founder and CEO Sascha Konietzke told me, the company didn’t really need to raise right now. “We had just raised our last round about a year ago. We still had plenty of cash in our bank account and we didn’t need to raise as of now,” said Konietzke. “But we saw a lot of economic uncertainty, so we thought it might be a good moment in time to recharge. And at the same time, we already had some interesting conversations ongoing with Sapphire [formerly SAP Ventures] and Salesforce. So we saw the opportunity to add more funding and also start getting into a tight relationship with both of these players.”

The original plan for Contentful was to focus almost exclusively on mobile. As it turns out, though, the company’s customers also wanted to use the service to handle their web-based applications, and these days Contentful happily supports both. “What we’re seeing is that everything is becoming an application,” he told me. “We started with native mobile applications, but even the websites nowadays are often an application.”

In its early days, Contentful focused only on developers. Now, however, that’s changing, and having these connections to large enterprise players like SAP and Salesforce surely isn’t going to hurt the company as it looks to bring on larger enterprise accounts.

Currently, the company’s focus is very much on Europe and North America, which account for about 80 percent of its customers. For now, Contentful plans to continue to focus on these regions, though it obviously supports customers anywhere in the world.

Contentful only exists as a hosted platform. As of now, the company doesn’t have any plans for offering a self-hosted version, though Konietzke noted that he does occasionally get requests for this.

What the company is planning to do in the near future, though, is enable more integrations with existing enterprise tools. “Customers are asking for deeper integrations into their enterprise stack,” Konietzke said. “And that’s what we’re beginning to focus on and where we’re building a lot of capabilities around that.” In addition, support for GraphQL and an expanded rich text editing experience are coming up. The company also recently launched a new editing experience.

Dec 04, 2018

Cove.Tool wants to solve climate change one efficient building at a time

As the fight against climate change heats up, Cove.Tool is looking to help tackle carbon emissions one building at a time.

The Atlanta-based startup provides an automated big-data platform that helps architects, engineers and contractors identify the most cost-effective ways to make buildings compliant with energy efficiency requirements. After raising an initial round earlier this year, the company completed the final close of a $750,000 seed round. Since the initial announcement of the round earlier this month, Urban Us, the early-stage fund focused on companies transforming city life, has joined the syndicate alongside Tech Square Labs and Knoll Ventures.

Helping firms navigate a growing suite of energy standards and options

Cove.Tool software allows building designers and managers to plug in a variety of building conditions, energy options, and zoning specifications to get to the most cost-effective method of hitting building energy efficiency requirements (Cove.Tool Press Image / Cove.Tool / https://covetool.com).

In the US, the buildings we live and work in contribute more carbon emissions than any other sector. Governments across the country are now looking to improve energy consumption habits by implementing new building codes that set higher energy efficiency requirements for buildings. 

However, figuring out the best ways to meet changing energy standards has become an increasingly difficult task for designers. For one, buildings are subject to differing federal, state and city codes that are all frequently updated and overlaid on one another. Therefore, the specific efficiency requirements for a building can be hard to understand, geographically unique and immensely variable from project to project.

Architects, engineers and contractors also have more options for managing energy consumption than ever before – equipped with tools like connected devices, real-time energy-management software and more-affordable renewable energy resources. And the effectiveness and cost of each resource are also impacted by variables distinct to each project and each location, such as local conditions, resource placement, and factors as specific as the amount of shade a building sees.

With designers and contractors facing countless resource combinations and weightings, Cove.Tool looks to make it easier to identify and implement the most cost-effective and efficient resource bundles that can be used to hit a building’s energy efficiency requirements.

Cove.Tool users begin by specifying a variety of project-specific inputs, which can include a vast amount of extremely granular detail around a building’s use, location, dimensions or otherwise. The software runs the inputs through a set of parametric energy models before spitting out the optimal resource combination under the set parameters.

For example, if a project is located on a site with heavy wind flow in a cold city, the platform might tell you to increase window size and spend on energy efficient wall installations, while reducing spending on HVAC systems. Along with its recommendations, Cove.Tool provides in-depth but fairly easy-to-understand graphical analyses that illustrate various aspects of a building’s energy performance under different scenarios and sensitivities.
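Conceptually, the search the platform automates looks something like the sketch below: enumerate resource bundles, discard those that miss the energy target, and keep the cheapest survivor. The figures and the one-line energy model are invented for illustration; Cove.Tool’s real parametric models are far richer.

from itertools import product

# (name, cost in $, kWh/m2/yr saved) -- all figures invented
windows = [("small", 10_000, 0.0), ("large", 25_000, 4.0)]
insulation = [("code-min", 8_000, 2.0), ("high", 20_000, 7.0)]
hvac = [("basic", 30_000, 3.0), ("efficient", 55_000, 9.0)]

BASELINE_EUI = 80.0  # hypothetical baseline energy use intensity
TARGET_EUI = 65.0    # hypothetical code requirement

best = None
for combo in product(windows, insulation, hvac):
    cost = sum(c for _, c, _ in combo)
    eui = BASELINE_EUI - sum(s for _, _, s in combo)
    # Keep the cheapest bundle that satisfies the energy target
    if eui <= TARGET_EUI and (best is None or cost < best[0]):
        best = (cost, eui, [name for name, _, _ in combo])

print(best)  # e.g. (85000, 64.0, ['small', 'high', 'efficient'])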

Cove.Tool users can input granular project-specifics, such as shading from particular beams and facades, to get precise analyses around a building’s energy performance under different scenarios and sensitivities.

Democratizing building energy modeling

Traditionally, the design process for a building’s energy system can be quite painful for architecture and engineering firms.

An architect would send initial building designs to engineers, who then test out a variety of energy system scenarios over the course of a few weeks. By the time the engineers come back with an analysis, the architects have often made significant design changes, which then get sent back to the engineers, leaving the energy plan perpetually one to three months behind the rest of the building. This process not only leads to less-efficient and more-expensive energy infrastructure, but the hectic back-and-forth can also mean longer project timelines, unexpected construction issues, delays and budget overruns.

Cove.Tool effectively looks to automate the process of “energy modeling.” Energy modeling looks to ease the pains of energy design in the same way Building Information Modeling (BIM) has transformed architectural design and construction. Just as BIM creates predictive digital simulations that test all the design attributes of a project, energy modeling uses building specs, environmental conditions and various other parameters to simulate a building’s energy efficiency, costs and footprint.

By using energy modeling, developers can optimize the design of the building’s energy system, adjust plans in real-time, and more effectively manage the construction of a building’s energy infrastructure. However, the expertise needed for energy modeling falls outside the comfort zones of many firms, who often have to outsource the task to expensive consultants.

The frustrations of energy system design and the complexities of energy modeling are ones the Cove.Tool team knows well. Patrick Chopson and Sandeep Ahuja, two of the company’s three co-founders, are former architects who worked as energy modeling consultants when they first began building out the Cove.Tool software.

After seeing their clients’ initial excitement over the ability to quickly analyze millions of combinations and instantly identify the ones that produce cost and energy savings, Patrick and Sandeep teamed up with CTO Daniel Chopson and focused full-time on building out a comprehensive automated solution that would allow firms to run energy modeling analysis without costly consultants, more quickly, and through an interface that would be easy enough for an architectural intern to use.

So far there seems to be serious demand for the product, with the company already boasting an impressive roster of customers that includes several of the country’s largest architecture firms, such as HGA, HKS and Cooper Carry. And the platform has delivered compelling results – for example, one residential developer was able to identify energy solutions that cost $2 million less than the building’s original model. With the funds from its seed round, Cove.Tool plans to further enhance its sales effort while continuing to develop additional features for the platform.

Changing decision-making and fighting climate change

The value proposition Cove.Tool hopes to offer is clear – the company wants to make it easier, faster and cheaper for firms to use innovative design processes that help identify the most cost-effective and energy-efficient solutions for their buildings, all while reducing the risks of redesign, delay and budget overruns.

Longer-term, the company hopes that it can help the building industry move towards more innovative project processes and more informed decision-making while making a serious dent in the fight against emissions.

“We want to change the way decisions are made. We want decisions to move away from being just intuition to become more data-driven,” the co-founders told TechCrunch.

“Ultimately we want to help stop climate change one building at a time. Stopping climate change is such a huge undertaking but if we can change the behavior of buildings it can be a bit easier. Architects and engineers are working hard but they need help and we need to change.”

Dec 04, 2018

Microsoft and Docker team up to make packaging and running cloud-native applications easier

Microsoft and Docker today announced a new joint open-source project, the Cloud Native Application Bundle (CNAB), that aims to make the lifecycle management of cloud-native applications easier. At its core, the CNAB is nothing but a specification that allows developers to declare how an application should be packaged and run. With this, developers can define their resources and then deploy the application to anything from their local workstation to public clouds.
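To make that concrete, here is what a minimal CNAB-style bundle manifest might look like, assembled as a Python dict for clarity. The field names follow the early public draft of the spec and should be read as illustrative assumptions rather than the authoritative schema.

import json

# Illustrative only: field names per the early CNAB draft, not a
# guaranteed match for the final spec
bundle = {
    "name": "helloworld",
    "version": "0.1.0",
    "invocationImages": [
        {
            # The invocation image packages the install/upgrade/uninstall logic
            "imageType": "docker",
            "image": "example/helloworld-cnab:0.1.0",  # hypothetical image
        }
    ],
}

print(json.dumps(bundle, indent=2))

A client such as Duffle, described below, would consume a manifest of this shape to drive the lifecycle steps.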

The specification was born inside Microsoft, but as the team talked to Docker, it turned out that the engineers there were working on a similar project. The two decided to combine forces and launch the result as a single open-source project. “About a year ago, we realized we’re both working on the same thing,” Microsoft’s Gabe Monroy told me. “We decided to combine forces and bring it together as an industry standard.”

As part of this, Microsoft is launching its own reference implementation of a CNAB client today. Duffle, as it’s called, allows users to perform all the usual lifecycle steps (install, upgrade, uninstall), create new CNAB bundles and sign them cryptographically. Docker is working on integrating CNAB into its own tools, too.

Microsoft also today launched a Visual Studio Code extension for building and hosting these bundles, as well as an example implementation of a bundle repository server and an Electron installer that lets you install a bundle with the help of a GUI.

Now it’s worth noting that we’re talking about a specification and reference implementations here. There is obviously a huge ecosystem of lifecycle management tools on the market today that all have their own strengths and weaknesses. “We’re not going to be able to unify that tooling,” said Monroy. “I don’t think that’s a feasible goal. But what we can do is we can unify the model around it, specifically the lifecycle management experience as well as the packaging and distribution experience. That’s effectively what Docker has been able to do with the single-workload case.”

Over time, Microsoft and Docker would like for the specification to end up in a vendor-neutral foundation. Which one remains to be seen, though the Open Container Initiative seems like the natural home for a project like this.

Nov 29, 2018

AWS announces a slew of new Lambda features

AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write your event-triggered code and AWS deals with whatever compute, memory and storage you need to make it work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make the service more developer-friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving whatever resources you need to run your event and no more. It means you no longer have to worry about coding for all your infrastructure and you only pay for the computing you need at any given moment to make the application work.
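That contract shows up in how little code a function needs. A minimal Python Lambda handler looks like this; the event fields depend on whatever your trigger sends, so the name key here is just an example.

def handler(event, context):
    # 'event' carries the trigger payload (an HTTP request, an S3
    # notification, etc.); 'context' carries runtime metadata
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}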

The way AWS works is that it tends to release something, then build more functionality on top of the base service as customer requirements grow. As Amazon CTO Werner Vogels pointed out in his keynote on Thursday, developers debate about tools, and everyone has their own idea of what tools they bring to the task every day.

For starters, they decided to please the language folks by introducing support for new languages. Those developers who use Ruby can now use Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced C++ Lambda Runtime. If neither of those match your programming language tastes, AWS opened it up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”
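The Runtime API is language-agnostic because it is just HTTP: a custom runtime polls for the next invocation, runs the handler, and posts the result back. Here is a bare-bones sketch of that loop in Python, using the documented 2018-06-01 endpoint paths (the echo handler is a placeholder).

import json
import os
import urllib.request

API = f"http://{os.environ['AWS_LAMBDA_RUNTIME_API']}/2018-06-01/runtime"

while True:
    # Block until Lambda hands this runtime the next event
    with urllib.request.urlopen(f"{API}/invocation/next") as resp:
        request_id = resp.headers["Lambda-Runtime-Aws-Request-Id"]
        event = json.loads(resp.read())

    result = json.dumps({"echo": event})  # your handler logic goes here

    # Report the invocation's result back to the Runtime API
    urllib.request.urlopen(urllib.request.Request(
        f"{API}/invocation/{request_id}/response",
        data=result.encode(),
        method="POST",
    ))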

AWS didn’t want to stop with languages, though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.
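In Python terms, a layer’s contents land under /opt at runtime (with /opt/python on the import path), so functions consume shared code with a plain import. The shared_billing module below is a hypothetical example of code a team might publish as a layer.

# Lambda extracts layers under /opt; for Python, /opt/python is on
# sys.path, so code shipped in a layer imports like any module
from shared_billing import calculate_invoice  # hypothetical layer module

def handler(event, context):
    # The function keeps only its glue code; the shared business
    # logic lives in the layer, centrally managed and versioned
    return calculate_invoice(event["account_id"])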

As Lambda matures, developer requirements grow, and these announcements and others are part of trying to meet those needs.


Nov 29, 2018

AWS launches a managed Kafka service

Kafka is an open-source tool for handling incoming streams of data. Like virtually all powerful tools, it’s somewhat hard to set up and manage. Today, Amazon’s AWS is making this all a bit easier for its users with the launch of Amazon Managed Streaming for Kafka. That’s a mouthful, but it’s essentially Kafka as a fully managed, highly available service on AWS. It’s now available as a public preview.
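Managed or not, clients talk to the cluster the same way they talk to any Kafka deployment. A minimal producer sketch with the kafka-python library follows; the broker address and topic name are hypothetical stand-ins for the bootstrap endpoints a managed cluster hands you.

from kafka import KafkaProducer

# Broker address is a hypothetical placeholder for the bootstrap
# endpoints the managed service provides
producer = KafkaProducer(
    bootstrap_servers=["b-1.example.kafka.us-east-1.amazonaws.com:9092"],
)

# Send one record to a (hypothetical) 'clickstream' topic
producer.send("clickstream", value=b'{"user": 42, "page": "/home"}')
producer.flush()  # block until the broker acknowledges the record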

As AWS CTO Werner Vogels noted in his AWS re:Invent keynote, Kafka users traditionally had to do a lot of heavy lifting to set up a cluster on AWS and to ensure that it could scale and handle failures. “It’s a nightmare having to restart all the clusters and the main nodes,” he said. “This is what I would call the traditional heavy lifting that AWS is really good at solving for you.”

It’s interesting to see AWS launch this service, given that it already offers a very similar tool in Kinesis, a tool that also focuses on ingesting streaming data. There are plenty of applications on the market today that already use Kafka, and AWS is clearly interested in giving those users a pathway to either move to a managed Kafka service or to AWS in general.

As with all things AWS, the pricing is a bit complicated, but a basic Kafka instance will start at $0.21 per hour. You’re not likely to use just one instance, though, so for a somewhat useful setup with three brokers, a good amount of storage and the various extra fees, you’ll quickly pay well over $500 per month.

