Jan 31, 2019

Google’s Cloud Firestore NoSQL database hits general availability

Google today announced that Cloud Firestore, its serverless NoSQL document database for mobile, web and IoT apps, is now generally available. In addition, Google is also introducing a few new features and bringing the service to 10 new regions.

With this launch, Google is giving developers the option to run their databases in a single region. During the beta, developers had to use multi-region instances, and, while that obviously has some advantages with regard to resilience, it’s also more expensive and not every app needs to run in multiple regions.

“Some people don’t need the added reliability and durability of a multi-region application,” Google product manager Dan McGrath told me. “So for them, having a more cost-effective regional instance is very attractive, as well as data locality and being able to place a Cloud Firestore database as close as possible to their user base.”

The new regional instance pricing is up to 50 percent cheaper than the current multi-region instance prices. Which option you pick does influence the SLA Google gives you, though. While regional instances are still replicated across multiple zones inside the region, all of the data remains within a limited geographic area. Hence, Google promises 99.999 percent availability for multi-region instances and 99.99 percent availability for regional instances.

And talking about regions, Cloud Firestore is now available in 10 new regions around the world. Firestore started with a single location at launch and added two more during the beta. With this expansion, Firestore is now available in 13 locations (including the North America and Europe multi-region offerings). McGrath tells me Google is still planning the next set of locations, but he stressed that the current lineup provides pretty good coverage across the globe.

Also new in this release is deeper integration with Stackdriver, Google Cloud’s monitoring service, which can now monitor read, write and delete operations in near real time. McGrath also noted that Google plans to add the ability to query documents across collections and to increment database values without needing a transaction.

It’s worth noting that while Cloud Firestore falls under the Google Firebase brand, which typically focuses on mobile developers, Firestore also offers the usual server-side client libraries for Compute Engine or Kubernetes Engine applications.
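
To make that concrete, here is a minimal sketch of a server-side application on Compute Engine or GKE talking to Cloud Firestore through the Python client library; the collection, document and field names are placeholders.

```python
# Minimal sketch: server-side Cloud Firestore access with the Python client.
# The collection, document and field names are hypothetical placeholders.
from google.cloud import firestore

# On Compute Engine or GKE, the client picks up credentials and the
# project ID from the environment automatically.
db = firestore.Client()

# Write a document.
order_ref = db.collection("orders").document("order-1234")
order_ref.set({"customer": "acme-corp", "total": 42.50, "status": "open"})

# Read it back.
snapshot = order_ref.get()
if snapshot.exists:
    print(snapshot.to_dict())
```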

“If you’re looking for a more traditional NoSQL document database, then Cloud Firestore gives you a great solution that has all the benefits of not needing to manage the database at all,” McGrath said. “And then, through the Firebase SDK, you can use it as a more comprehensive back-end as a service that takes care of things like authentication for you.”

One of the advantages of Firestore is its extensive offline support, which makes it ideal not just for mobile developers but also for IoT solutions. Maybe it’s no surprise, then, that Google is positioning it as a tool for both Google Cloud and Firebase users.

Jan 23, 2019

AWS launches WorkLink to make accessing mobile intranet sites and web apps easier

If your company uses a VPN and/or a mobile device management service to give you access to its intranet and internal web apps, then you know how annoying those are. AWS today launched a new product, Amazon WorkLink, that promises to make this process significantly easier.

WorkLink is a fully managed service that, for $5 per month and per user, allows IT admins to give employees one-click access to internal sites, no matter whether they run on AWS or not.

After installing WorkLink on their phones, employees can simply use their favorite browser to surf to an internal website (other solutions often force users to use a sub-par proprietary browser). WorkLink then goes to work: it securely requests the site and, in the clever part of the process, a secure WorkLink container renders the page into an interactive vector graphic and sends that back to the phone. Nothing is stored or cached on the phone, and AWS says WorkLink knows nothing about personal device activity either. That also means that when a device is lost or stolen, there’s no need to try to wipe it remotely, because there’s simply no company data on it.

IT can either use a VPN to connect from an AWS Virtual Private Cloud to on-premises servers or use AWS Direct Connect to bypass a VPN solution. The service works with all SAML 2.0 identity providers (which covers the majority of identity services used in the enterprise, including the likes of Okta and Ping Identity), and, as a fully managed service, it handles scaling and updates in the background.

“When talking with customers, all of them expressed frustration that their workers don’t have an easy and secure way to access internal content, which means that their employees either waste time or don’t bother trying to access content that would make them more productive,” says Peter Hill, vice president of Productivity Applications at AWS, in today’s announcement. “With Amazon WorkLink, we’re enabling greater workplace productivity for those outside the corporate firewall in a way that IT administrators and security teams are happy with and employees are willing to use.”

WorkLink will work with both Android and iOS, but for the time being, only the iOS app (iOS 12+) is available. For now, it also only works with Safari, with Chrome support coming in the next few weeks. The service is likewise limited to Europe and North America, with additional regions coming later this year.

For the time being, AWS’s cloud archrivals Google and Microsoft don’t offer any services that are quite comparable with WorkLink. Google offers its Cloud Identity-Aware Proxy as a VPN alternative and as part of its BeyondCorp program, though that has a very different focus, while Microsoft offers a number of more traditional mobile device management solutions.

Jan 23, 2019

Open-source leader Confluent raises $125M on $2.5B valuation

Confluent, the commercial company built on top of the open-source Apache Kafka project, announced a $125 million Series D round this morning on an enormous $2.5 billion valuation.

The round was led by existing investor Sequoia Capital, with participation from Index Ventures and Benchmark, which also participated in previous rounds. Today’s investment brings the total raised to $206 million, according to the company.

The valuation has soared since the previous round, when the company was valued at $500 million. What’s more, the company’s bookings have scaled along with the valuation.

[Graph: Confluent]

While CEO Jay Kreps wouldn’t comment directly on a future IPO, he hinted that it is something the company is looking to do at some point. “With our growth and momentum so far, and with the latest funding, we are in a very good position to and have a desire to build a strong, independent company,” Kreps told TechCrunch.

Confluent’s technology, built around Kafka, processes massive amounts of streaming data in real time, something that comes in handy in today’s data-intensive environment. The underlying streaming technology was developed at LinkedIn as a way to move massive numbers of messages. LinkedIn open-sourced that technology in 2011, and Confluent launched as the commercial arm in 2014.

Kreps, writing in a company blog post announcing the funding, said that the events concept encompasses the basic building blocks of businesses. “These events are the orders, sales and customer experiences, that constitute the operation of the business. Databases have long helped to store the current state of the world, but we think this is only half of the story. What is missing are the continually flowing stream of events that represents everything happening in a company, and that can act as the lifeblood of its operation,” he wrote.
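
To make that idea concrete, here is a minimal sketch of an application publishing one such business event to a Kafka topic using the confluent-kafka Python client; the broker address, topic name and payload are hypothetical.

```python
# Minimal sketch: publishing a business event to Kafka with the
# confluent-kafka Python client (broker, topic and payload are hypothetical).
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

event = {"type": "order_created", "order_id": "1234", "total": 42.50}

# Keying events by order ID keeps all updates for one order in the
# same partition, so they stay ordered for downstream consumers.
producer.produce("orders", key=event["order_id"], value=json.dumps(event))
producer.flush()
```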

Kreps pointed out that as an open-source project, Confluent depends on the community. “This is not something we’re doing alone. Apache Kafka has a massive community of contributors of which we’re just one part,” he wrote.

While the base open-source component remains available for free download, it doesn’t include the additional tooling the company has built to make it easier for enterprises to use Kafka. Recent additions include a managed cloud version of the product and a marketplace, Confluent Hub, for sharing extensions to the platform.

As we watch the company’s valuation soar, it does so against a backdrop of other companies based on open source selling for big bucks in 2018, including IBM buying Red Hat for $34 billion in October and Salesforce acquiring MuleSoft for $6.5 billion.

The company’s previous round was a $50 million raise in March 2017.

Jan 16, 2019

AWS launches Backup, a fully managed backup service for AWS

Amazon’s AWS cloud computing service today launched Backup, a new tool that makes it easier for developers on the platform to back up their data from various AWS services and their on-premises apps. Out of the box, the service, which is now available to all developers, lets you set up backup policies for services like Amazon EBS volumes, RDS databases, DynamoDB tables, EFS file systems and AWS Storage Gateway volumes. Support for more services is planned, too. To back up on-premises data, businesses can use the AWS Storage Gateway.

The service allows users to define their various backup policies and retention periods, including the ability to move backups to cold storage (for EFS data) or delete them completely after a certain time. By default, the data is stored in Amazon S3 buckets.
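
As a rough sketch of what defining such a policy might look like programmatically with boto3 (the plan name, vault, schedule and retention values are placeholders, and the exact field names are worth double-checking against the AWS Backup documentation):

```python
# Rough sketch: creating a daily backup plan with boto3.
# Names, schedule and retention values are placeholders.
import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-backups",
        "Rules": [
            {
                "RuleName": "daily-at-5am-utc",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {
                    # Move eligible backups to cold storage after 30 days,
                    # delete them after a year.
                    "MoveToColdStorageAfterDays": 30,
                    "DeleteAfterDays": 365,
                },
            }
        ],
    }
)
print(plan["BackupPlanId"])
```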

Most of the supported services, except for EFS file systems, already feature the ability to create snapshots. Backup essentially automates that process and creates rules around it, so it’s no surprise that pricing for Backup is the same as for using those snapshot features (with the exception of the file system backup, which will have a per-GB charge). It’s worth noting that you’ll also pay a per-GB fee for restoring data from EFS file systems and DynamoDB backups.

Currently, Backup’s scope is limited to a given AWS region, but the company says that it plans to offer cross-region functionality later this year.

“As the cloud has become the default choice for customers of all sizes, it has attracted two distinct types of builders,” writes Bill Vass, AWS’s VP of Storage, Automation, and Management Services. “Some are tinkerers who want to tweak and fine-tune the full range of AWS services into a desired architecture, and other builders are drawn to the same breadth and depth of functionality in AWS, but are willing to trade some of the service granularity to start at a higher abstraction layer, so they can build even faster. We designed AWS Backup for this second type of builder who has told us that they want one place to go for backups versus having to do it across multiple, individual services.”

Early adopters of AWS Backup are State Street Corporation, Smile Brands and Rackspace, though this is surely a service that will attract its fair share of users as it makes the life of admins quite a bit easier. AWS does have quite a few backup and storage partners, however, who may not be all that excited to see AWS jump into this market, even if they often offer a wider range of functionality than AWS’s service, including cross-region and offsite backups.


Jan 16, 2019

Nvidia’s T4 GPUs are now available in beta on Google Cloud

Google Cloud today announced that Nvidia’s Turing-based Tesla T4 data center GPUs are now available in beta in its data centers in Brazil, India, Netherlands, Singapore, Tokyo and the United States. Google first announced a private test of these cards in November, but that was a very limited alpha test. All developers can now take these new T4 GPUs for a spin through Google’s Compute Engine service.

The T4, which essentially uses the same processor architecture as Nvidia’s RTX cards for consumers, slots in between the existing Nvidia V100 and P4 GPUs on the Google Cloud Platform. While the V100 is optimized for machine learning, the T4 (like its P4 predecessor) is more of a general-purpose GPU that also turns out to be great for training models and inferencing.

In terms of machine and deep learning performance, the 16GB T4 is significantly slower than the V100, though if you are mostly running inference on the cards, you may actually see a speed boost. Unsurprisingly, using the T4 is also cheaper than the V100, starting at $0.95 per hour compared to $2.48 per hour for the V100, with another discount for using preemptible VMs and Google’s usual sustained use discounts.

Google says that the card’s 16GB of memory should easily handle large machine learning models, as well as allow running multiple smaller models at the same time. The standard PCI Express 3.0 card also comes with support for Nvidia’s Tensor Cores to accelerate deep learning and Nvidia’s new RTX ray-tracing cores. Performance tops out at 260 TOPS, and developers can connect up to four T4 GPUs to a virtual machine.

It’s worth stressing that this is also the first GPU in the Google Cloud lineup that supports Nvidia’s ray-tracing technology. There isn’t a lot of software on the market yet that actually makes use of this technique, which allows you to render more lifelike images in real time, but if you need a virtual workstation with a powerful next-generation graphics card, that’s now an option.

With today’s beta launch of the T4, Google Cloud now offers quite a variety of Nvidia GPUs, including the K80, P4, P100 and V100, all at different price points and with different performance characteristics.

Jan 14, 2019

Salesforce Commerce Cloud updates keep us shopping with AI-fueled APIs

As people increasingly use their mobile phones and other devices to shop, it has become imperative for vendors to improve the shopping experience, making it as simple as possible, given the small footprint. One way to do that is using artificial intelligence. Today, Salesforce announced some AI-enhanced APIs designed to keep us engaged as shoppers.

For starters, the company wants to keep you shopping. That means providing an intelligent recommendation engine. If you searched for a particular jacket, you might like these similar styles, or this scarf and gloves. That’s fairly basic as shopping experiences go, but Salesforce didn’t stop there. It’s letting developers embed this ability to recommend products in any app, whether that’s maps, social or mobile.

That means shopping recommendations could pop up anywhere developers think they make sense, like in your maps app. Whether or not consumers see this as a positive, Salesforce says that adding intelligence to the shopping experience increases sales anywhere from 7 to 16 percent, so however you feel about it, it seems to be working.

The company also wants to make it simple to shop. Instead of entering a multi-faceted search (footwear, men’s, sneakers, red), as has traditionally been the way to shop, you can take a picture of a sneaker (or anything else you like) and the visual search algorithm should recognize it and make recommendations based on that picture. That reduces data entry for users, which is typically a pain on a mobile device, even if it has been simplified by checkboxes.

Salesforce has also turned inventory availability into a service, letting shoppers know exactly where the item they want is available anywhere in the world. If they want to pick it up in-store that day, it shows where the store is on a map and could even pass that location to your ridesharing app to indicate exactly where you want to go. The idea is to create a seamless experience between consumer desire and purchase.

Finally, Salesforce has added some goodies to make developers happy, too, including the ability to browse the Salesforce API library and find the ones that make the most sense for what they are creating. This includes code snippets to get started. It may not seem like a big deal, but as companies the size of Salesforce increase their API capabilities (especially with the MuleSoft acquisition), it’s harder to know what’s available. The company has also created a sandboxing capability to let developers experiment and build capabilities with these APIs in a safe way.

The basis of Commerce Cloud is Demandware, the company Salesforce acquired in 2016 for $2.8 billion. Salesforce’s intelligence platform is called Einstein. In spite of the attempt to personify the technology, it’s really about bringing artificial intelligence across the Salesforce platform of products, as it has with today’s API announcements.

Jan 9, 2019

AWS gives open source the middle finger

AWS launched DocumentDB today, a new database offering that is compatible with the MongoDB API. The company describes DocumentDB as a “fast, scalable, and highly available document database that is designed to be compatible with your existing MongoDB applications and tools.” In effect, it’s a hosted drop-in replacement for MongoDB that doesn’t use any MongoDB code.

AWS argues that while MongoDB is great at what it does, its customers have found it hard to build fast and highly available applications on the open-source platform that can scale to multiple terabytes and hundreds of thousands of reads and writes per second. So the company built its own document database, but made it compatible with the MongoDB 3.6 API, which dates back to when MongoDB was still available under the Apache 2.0 open-source license.
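
Because the compatibility is at the API level, existing MongoDB drivers and tools are meant to work unchanged against DocumentDB. Here is a rough sketch of what that looks like with the pymongo driver; the cluster endpoint, credentials, database name and CA bundle path are all placeholders.

```python
# Rough sketch: connecting to a DocumentDB cluster with the standard
# pymongo driver (endpoint, credentials and CA bundle are placeholders).
from pymongo import MongoClient

client = MongoClient(
    "mongodb://appuser:secret@my-cluster.cluster-example.us-east-1.docdb.amazonaws.com:27017/"
    "?ssl=true&replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=false",
    tlsCAFile="rds-combined-ca-bundle.pem",  # DocumentDB requires TLS by default
)

db = client["shop"]
db["orders"].insert_one({"order_id": "1234", "status": "open"})
print(db["orders"].find_one({"order_id": "1234"}))
```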

If you’ve been following the politics of open source over the last few months, you’ll understand that the optics of this aren’t great. It’s also no secret that AWS has long been accused of taking the best open-source projects and re-using and re-branding them without always giving back to those communities.

The wrinkle here is that MongoDB was one of the first companies that aimed to put a stop to this by re-licensing its open-source tools under a new license that explicitly stated that companies that wanted to do this had to buy a commercial license. Since then, others have followed.

“Imitation is the sincerest form of flattery, so it’s not surprising that Amazon would try to capitalize on the popularity and momentum of MongoDB’s document model,” MongoDB CEO and president Dev Ittycheria told us. “However, developers are technically savvy enough to distinguish between the real thing and a poor imitation. MongoDB will continue to outperform any impersonations in the market.”

That’s a pretty feisty comment. Last November, Ittycheria told my colleague Ron Miller that he believed that AWS loved MongoDB because it drives a lot of consumption. In that interview, he also noted that “customers have spent the last five years trying to extricate themselves from another large vendor. The last thing they want to do is replay the same movie.”

MongoDB co-founder and CTO Eliot Horowitz echoed this. “In order to give developers what they want, AWS has been pushed to offer an imitation MongoDB service that is based on the MongoDB code from two years ago,” he said. “Our entire company is focused on one thing — giving developers the best way to work with data with the freedom to run anywhere. Our commitment to that single mission will continue to differentiate the real MongoDB from any imitation products that come along.”

A company spokesperson for MongoDB also highlighted that the 3.6 API that DocumentDB is compatible with is now two years old and misses most of the newest features, including ACID transactions, global clusters and mobile sync.

To be fair, AWS has become more active in open source lately and, in a way, it’s giving developers what they want (and not all developers are happy with MongoDB’s own hosted service). But bypassing MongoDB’s licensing by going for API compatibility, given that AWS knows exactly why MongoDB changed its license, was always going to be a controversial move and won’t endear the company to the open-source community.

Jan 9, 2019

Baidu Cloud launches its open-source edge computing platform

At CES, the Chinese tech giant Baidu today announced OpenEdge, its open-source edge computing platform. At its core, OpenEdge is the local package component of Baidu’s existing Intelligent Edge (BIE) commercial offering and obviously plays well with that service’s components for managing edge nodes and apps.

Because this is clearly a developer announcement, I’m not sure why Baidu decided to use CES as the venue for this release, but there can be no doubt that China’s major tech firms have become quite comfortable with open source. Baidu, Alibaba, Tencent and others are members of the Linux Foundation and its growing stable of projects, for example, and virtually every major open-source organization now looks to China as its growth market. It’s no surprise, then, that we’re also now seeing a wider range of Chinese companies open-sourcing their own projects.

“Edge computing is a critical component of Baidu’s ABC (AI, Big Data and Cloud Computing) strategy,” says Baidu VP and GM of Baidu Cloud Watson Yin. “By moving the compute closer to the source of the data, it greatly reduces the latency, lowers the bandwidth usage and ultimately brings real-time and immersive experiences to end users. And by providing an open source platform, we have also greatly simplified the process for developers to create their own edge computing applications.”

A company spokesperson tells us that the open-source platform will include features like data collection, message distribution and AI inference, as well as tools for syncing with the cloud.

Baidu also today announced that it has partnered with Intel to launch the BIE-AI-Box and with NXP Semiconductors to launch the BIE-AI-Board. The box is designed for in-vehicle video analysis while the board is small enough for cameras, drones, robots and similar applications.


Jan 7, 2019

GitHub Free users now get unlimited private repositories

If you’re a GitHub user, but you don’t pay, this is a good week. Historically, GitHub always offered free accounts but the caveat was that your code had to be public. To get private repositories, you had to pay. Starting tomorrow, that limitation is gone. Free GitHub users now get unlimited private projects with up to three collaborators.

The number of collaborators is really the only limitation here, and there’s no change to how the service handles public repositories, which can still have unlimited collaborators.

This feels like a sign of goodwill on the part of Microsoft, which closed its acquisition of GitHub last October, with former Xamarin CEO Nat Friedman taking over as GitHub’s CEO. Some developers were rather nervous about the acquisition (though it feels like most have come to terms with it). It’s also fair to assume that GitHub’s model for monetizing the service is a bit different from Microsoft’s. Microsoft doesn’t need to try to get money from small teams; that’s not where the bulk of its revenue comes from. Instead, the company is mostly interested in getting large enterprises to use the service.

Talking about teams, GitHub also today announced that it is changing the name of the GitHub Developer suite to ‘GitHub Pro.’ The company says it’s doing so in order to “help developers better identify the tools they need.”

But what’s maybe even more important is that GitHub Business Cloud and GitHub Enterprise (now called Enterprise Cloud and Enterprise Server) have become one and are now sold under the ‘GitHub Enterprise’ label and feature per-user pricing.

The announcement of free private repositories probably took some of GitHub’s competitors by surprise, but here is what we heard from GitLab CEO Sid Sijbrandij: “GitHub today announced the launch of free private repositories with up to three collaborators. GitLab has offered unlimited collaborators on private repositories since the beginning. We believe Microsoft is focusing more on generating revenue with Azure and less on charging for DevOps software. At GitLab, we believe in a multi-cloud future where organizations use multiple public cloud platforms.”

Note: this story was scheduled for tomorrow, but due to a broken embargo, we decided to publish today. The updates will go live tomorrow.

Jan 5, 2019

How Trulia began paying down its technical debt

As every software company knows, code ages, workarounds pile on top of workarounds and the code base becomes bloated over time. It becomes ever more difficult to work around the technical debt that has built up. It’s really impossible to avoid this phenomenon, but at some point, companies realize that the debt is so great that it’s limiting their ability to build new functionality. That’s precisely what Trulia faced in 2017 when it began a process of paying down that debt and modernizing its architecture.

Trulia is a real estate site founded way back in 2005, an eternity ago in terms of technology. The company went public in 2012 and was acquired by Zillow in 2014 for $3.5 billion, but it has continued to operate as an independent brand under the Zillow umbrella. When its engineering organization began thinking about this problem, it understood that a lot had changed technologically in the 12 years since the company’s inception. The team knew it had a humongous, monolithic code base that was inhibiting its ability to update the site.

While they tried to pull out some of the newer functions as services, it didn’t really make the site any more nimble because these services always had to tie back into that monolithic central code base. The development team knew if it was to escape this coding trap, it would take a complete overhaul.

Brainstorming broad change

As you would expect, a process like this doesn’t happen overnight; it takes months to plan and implement. It all started back in 2017, when the company held what it called an “Innovation Week” with the entire engineering team. Groups of engineers came up with ideas about how to solve the problem, but the one that got the most attention was Project Islands, which involved breaking the different pieces of the site out into individual coding islands that could operate independently of one another.

It sounds simple, but in practice it involved breaking down the entire code base into services. The team would use Next.js and React to rebuild the front end, and GraphQL, an open-source API query language, to rebuild the back end.

Deep Varma, Trulia’s VP of engineering, pointed out that as a company founded in 2005, the site was built on PHP and MySQL, two popular development technologies from that time. Varma says that whenever his engineers made a change to any part of the site, they needed to do a complete system release. This caused a major bottleneck.

What they really needed to do was move to a completely modern microservices architecture that allowed engineering teams to work independently in a continuous delivery approach without breaking any other team’s code. That’s where the concept of islands came into play.

Islands in the stream

The islands were actually microservices. Each one could communicate with a set of central common services (authentication, A/B testing, the navigation bar, the footer), all of the pieces that every mini code base would need, while allowing the teams building these islands to work independently, without a huge rebuild every time they added a new element or changed something.


The harsh reality of this kind of overhaul came into focus as the teams realized they had to be writing the new pieces while the old system was still in place and running. In a video the company made describing the effort, one engineer likened it to changing the engine of a 747 in the middle of a flight.

Varma says he didn’t try to do everything at once, as he needed to see if the islands approach would work in practice first. In November 2017, he pulled the first engineering team together, and by January it had built the app shell (the common services piece) and one microservice island. When the proof of concept succeeded, Varma knew they were in business.

Building out the archipelago

It’s one thing to build a single island, but it’s another matter to build a chain of them and that would be the next step. By last April, engineering had shown enough progress that they were able to present the entire idea to senior management and get the go-ahead to move forward with a more complex project.


First, it took some work with the Next.js development team to get the framework to work the way Trulia wanted, so Varma brought the Next.js team in to work with his engineers. Together, they needed to figure out how to stitch the various islands together and resolve dependencies among the different services. The Next.js team actually changed its development roadmap for Trulia, speeding up delivery of these requirements and understanding that other companies would have similar issues.

By last July, the company released Neighborhoods, the first fully independent island functionality on the site. Recently, it moved off-market properties to islands. Off-market properties, as the name implies, are pages with information about properties that are no longer on the market. Varma says that these pages actually make up a significant portion of the company’s traffic.

While Varma would not say just how much of the site has been moved to islands at this point, he said the goal is to move the majority to the new platform in 2019. All of this shows that a complete overhaul of a complex site doesn’t happen overnight, but Trulia is taking steps to move off the original system it built in 2005 and onto the more modern and flexible island architecture it has created. It may not have paid down its technical debt in full in 2018, but it went a long way toward laying the foundation to do so.
