Sep 24, 2018

Salesforce partners with Apple to roll deeper into mobile enterprise markets

Apple and Salesforce are both highly successful, iconic brands that like to put on a big show when they make product announcements. Today, the two companies announced a strategic partnership with an emphasis on mobile, just ahead of Salesforce’s enormous customer conference, Dreamforce, which starts tomorrow in San Francisco.

For Apple, which has been establishing partnerships with key enterprise brands for the last several years, today’s news is another big step toward solidifying its enterprise strategy by bringing in the largest enterprise SaaS vendor in the world.

“We’re forming a strategic partnership with Salesforce to change the way people work and to empower developers of all abilities to build world-class mobile apps,” Susan Prescott, vice president of markets, apps and services at Apple, told TechCrunch.

Tim Cook at an Apple event on September 12, 2018. Photo: Justin Sullivan/Getty Images

Bret Taylor, president and chief product officer at Salesforce, who came over in the Quip deal a couple of years ago, says that, working together, the two companies can streamline mobile development for customers. “Every single one of our customers is on mobile. They all want world-class mobile experiences, and this enables us when we’re talking to a customer about their mobile strategy, that we can be in that conversation together,” he explained.

For starters, the partnership will involve three main components. The two companies will work together to bring key iOS features such as Siri Shortcuts and integration with Apple’s Business Chat into the Salesforce mobile app. And much like the partnership between Apple and IBM, Apple and Salesforce will work together to build industry-specific iOS apps on the Salesforce platform.

The companies are also working together on a new mobile SDK built specifically for Swift, Apple’s popular programming language. The plan is to provide a way to build Swift apps for iOS and deploy them natively on Salesforce’s Lightning platform.

The final component involves deeper integration with Trailhead, Salesforce’s education platform. That will include a new Trailhead Mobile app on iOS, as well as new Swift courses in the Trailhead catalogue to help drive adoption of the mobile SDK.

While Apple has largely been perceived as a consumer-focused company, it has benefited from the shift toward employees bringing their own devices to work over the last six or seven years. As that has happened, it has been able to sell more products and services into businesses, and has partnered with a number of other well-known enterprise brands including IBM, Cisco, SAP and GE, along with systems integrators Accenture and Deloitte.

The move gives Salesforce a formidable partner to help continue its incredible growth trajectory. Just last year the company passed a $10 billion run rate, putting it in rarefied company with some of the most successful software companies in the world. In its most recent earnings call at the end of August, it reported $3.28 billion for the quarter, putting it on a run rate of over $13 billion. Connecting with Apple could help keep that momentum going.

The two companies will show off the partnership at Dreamforce this week. It’s a deal that has the potential to work out well for both companies, giving Salesforce a more integrated iOS experience and helping Apple increase its reach into the enterprise.

Sep 24, 2018

Microsoft Azure gets new high-performance storage options

Microsoft Azure is getting a number of new storage options today that mostly focus on use cases where disk performance matters.

The first of these is Azure Ultra SSD Managed Disks, which are now in public preview. Microsoft says that these drives will offer “sub-millisecond latency,” which unsurprisingly makes them ideal for workloads where latency matters.

Earlier this year, Microsoft launched its Premium and Standard SSD Managed Disks offerings for Azure into preview. These ‘ultra’ SSDs represent the next tier up from the Premium SSDs, with even lower latency and higher throughput. They’ll offer up to 160,000 IOPS with less than a millisecond of read/write latency. These disks will come in sizes ranging from 4GB to 64TB.

Speaking of Standard SSD Managed Disks, that service is now generally available after only three months in preview. To top things off, all of Azure’s storage tiers (Premium and Standard SSD, as well as Standard HDD) now offer 8, 16 and 32 TB capacity options.

Also new today is Azure Premium Files, which is now in preview. This, too, is an SSD-based service. Azure Files itself isn’t new; it offers users access to cloud storage using the standard SMB protocol. The new premium offering promises higher throughput and lower latency for these kinds of SMB operations.

more Microsoft Ignite 2018 coverage

Sep 24, 2018

Microsoft hopes enterprises will want to use Cortana

In a world dominated by Alexa and the Google Assistant, Cortana suffers the fate of a perfectly good alternative that nobody uses and everybody forgets about. But Microsoft wouldn’t be Microsoft if it just gave up on its investment in this space, so it’s now launching the Cortana Skills Kit for Enterprise to see if that’s a niche where Cortana can succeed.

This new kit is an end-to-end solution for enterprises that want to build their own skills and agents. Of course, they could have done this before using the existing developer tools, and this kit isn’t all that different from those. Microsoft notes, though, that it is designed for deployment inside an organization and represents a new platform for building these experiences.

The Skills Kit platform is based on the Microsoft Bot Framework and the Azure Cognitive Services Language Understanding feature.

Overall, this is probably not a bad bet on Microsoft’s part. I can see how some enterprises would want to build their own skills for their employees and customers to access internal data, for example, or to complete routine tasks.

For now, this tool is only available in private preview. No word on when we can expect a wider launch.


Sep 24, 2018

Microsoft updates its planet-scale Cosmos DB database service

Cosmos DB is undoubtedly one of the most interesting products in Microsoft’s Azure portfolio. It’s a fully managed, globally distributed multi-model database that offers throughput guarantees, a number of different consistency models and high read and write availability guarantees. Now that’s a mouthful, but basically, it means that developers can build a truly global product, write database updates to Cosmos DB and rest assured that every other user across the world will see those updates within 20 milliseconds or so. And to write their applications, they can pretend that Cosmos DB is a SQL- or MongoDB-compatible database, for example.

Cosmos DB officially launched in May 2017, though in many ways it’s an evolution of Microsoft’s existing DocumentDB product, which was far less flexible. Today, a lot of Microsoft’s own products run on Cosmos DB, including the Azure Portal itself, as well as Skype, Office 365 and Xbox.

Today, Microsoft is extending Cosmos DB by launching its multi-master replication feature into general availability, as well as support for the Cassandra API, giving developers yet another option for bringing existing applications, in this case those written for Cassandra, to Cosmos DB.

Microsoft now also promises 99.999 percent read and write availability. Previously, its read availability promise was 99.99 percent. And while that may not seem like a big difference, it does show that after more than a year of operating Cosmos DB with customers, Microsoft now feels confident that it’s a highly stable system. In addition, Microsoft is also updating its write latency SLA and now promises less than 10 milliseconds at the 99th percentile.
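A quick bit of arithmetic shows why the extra nine matters more than it may seem. This is plain Python, not anything Cosmos DB-specific:

```python
# Allowed downtime per year implied by an availability SLA.

def downtime_minutes_per_year(availability_pct: float) -> float:
    minutes_per_year = 365 * 24 * 60            # 525,600 minutes
    return (1 - availability_pct / 100) * minutes_per_year

# "Four nines" allows roughly ten times more downtime than "five nines".
print(round(downtime_minutes_per_year(99.99), 1))    # 52.6 minutes/year
print(round(downtime_minutes_per_year(99.999), 2))   # 5.26 minutes/year
```

In other words, the new SLA shrinks the allowed annual downtime from just under an hour to a little over five minutes.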

“If you have write-heavy workloads, spanning multiple geos, and you need this near real-time ingest of your data, this becomes extremely attractive for IoT, web, mobile gaming scenarios,” Microsoft CosmosDB architect and product manager Rimma Nehme told me. She also stressed that she believes Microsoft’s SLA definitions are far more stringent than those of its competitors.

The highlight of the update, though, is multi-master replication. “We believe that we’re really the first operational database out there in the marketplace that runs on such a scale and will enable globally scalable multi-master available to the customers,” Nehme said. “The underlying protocols were designed to be multi-master from the very beginning.”

Why is this such a big deal? With this, developers can designate every region they run Cosmos DB in as a master in its own right, making for a far more scalable system in terms of being able to write updates to the database. There’s no need to first write to a single master node, which may be far away, and then have that node push the update to every other region. Instead, applications can write to the nearest region, and Cosmos DB handles everything from there. If there are conflicts, the user can decide how those should be resolved based on their own needs.
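As a rough sketch of that write path (a toy illustration, not Cosmos DB’s actual replication protocol, though last-writer-wins is one of the conflict resolution policies Cosmos DB offers; the region names are made up):

```python
# Toy multi-master replication with last-writer-wins (LWW) conflict
# resolution: every region accepts writes locally, and regions converge
# by keeping the version with the highest timestamp.

class Region:
    def __init__(self, name):
        self.name = name
        self.store = {}                      # key -> (timestamp, value)

    def write(self, key, value, ts):
        # Writes land at the nearest region; no single master in the path.
        self.store[key] = (ts, value)

    def replicate_from(self, other):
        # Merge another region's state; the higher timestamp wins.
        for key, (ts, value) in other.store.items():
            if key not in self.store or self.store[key][0] < ts:
                self.store[key] = (ts, value)

west, east = Region("westus"), Region("eastus")
west.write("doc1", "written in west", ts=1)
east.write("doc1", "written in east", ts=2)  # concurrent, later write

west.replicate_from(east)
east.replicate_from(west)
print(west.store["doc1"] == east.store["doc1"])  # True: both converged
```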

Nehme noted that all of this still plays well with CosmosDB’s existing set of consistency models. If you don’t spend your days thinking about database consistency models, then this may sound arcane, but there’s a whole area of computer science that focuses on little else but how to best handle a scenario where two users virtually simultaneously try to change the same cell in a distributed database.

Unlike other databases, Cosmos DB allows for a variety of consistency models, ranging from strong to eventual, with three intermediary models. And it actually turns out that most Cosmos DB users opt for one of those intermediary models.
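To see what the two ends of that spectrum mean in practice, here is a toy illustration (plain Python, not the Cosmos DB API, and the three intermediary models are more nuanced than this):

```python
# A value replicated to a lagging replica. Strong reads always see the
# latest write; eventual reads are cheaper but may return a stale value
# until replication catches up.

class ReplicatedValue:
    def __init__(self, value):
        self.primary = value
        self.replica = value
        self.pending = []        # updates not yet applied to the replica

    def write(self, value):
        self.primary = value
        self.pending.append(value)

    def sync(self):
        # Replication eventually applies pending updates.
        if self.pending:
            self.replica = self.pending[-1]
            self.pending.clear()

    def read(self, consistency="eventual"):
        return self.primary if consistency == "strong" else self.replica

v = ReplicatedValue("v1")
v.write("v2")
print(v.read("strong"))     # 'v2': always the latest write
print(v.read("eventual"))   # 'v1': stale until replication runs
v.sync()
print(v.read("eventual"))   # 'v2': now caught up
```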

Interestingly, when I talked to Leslie Lamport, the Turing Award winner who developed some of the fundamental concepts behind these consistency models (and the popular LaTeX document preparation system), he wasn’t all that sure that the developers are making the right choice. “I don’t know whether they really understand the consequences or whether their customers are going to be in for some surprises,” he told me. “If they’re smart, they are getting just the amount of consistency that they need. If they’re not smart, it means they’re trying to gain some efficiency and their users might not be happy about that.” He noted that when you give up strong consistency, it’s often hard to understand what exactly is happening.

But strong consistency comes with its own drawbacks, too, chief among them higher latency. “For strong consistency there are a certain number of roundtrip message delays that you can’t avoid,” Lamport noted.

The Cosmos DB team isn’t just building on some of the fundamental work Lamport did around distributed systems; it’s also making extensive use of TLA+, the formal specification language Lamport developed in the late 1990s. Microsoft, along with Amazon and others, is now training its engineers to use TLA+ to describe their algorithms mathematically before they implement them in whatever language they prefer.

“Because [Cosmos DB is] a massively complicated system, there is no way to ensure the correctness of it because we are humans, and trying to hold all of these failure conditions and the complexity in any one person’s — one engineer’s — head is impossible,” Microsoft Technical Fellow Dharma Shukla noted. “TLA+ is huge in terms of getting the design done correctly, specified and validated using the TLA+ tools even before a single line of code is written. You cover all of those hundreds of thousands of edge cases that can potentially lead to data loss or availability loss, or race conditions that you had never thought about, but that, two or three years after you have deployed the code, can lead to some data corruption for customers. That would be disastrous.”
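TLA+ itself is a specification language with its own model checker, but the core idea, exhaustively exploring every interleaving and checking an invariant, can be sketched in a few lines of plain Python (an analogy, to be clear, not TLA+):

```python
# Exhaustively check all interleavings of two processes that each do a
# non-atomic increment (read, then write). Invariant: the final counter
# should be 2. Enumeration finds the lost-update race a human might miss.
from itertools import permutations

def run(schedule):
    counter = 0
    local = {0: None, 1: None}
    for proc, step in schedule:
        if step == "read":
            local[proc] = counter        # read the shared counter
        else:
            counter = local[proc] + 1    # write back the stale read + 1
    return counter

steps = [(0, "read"), (0, "write"), (1, "read"), (1, "write")]
# Keep only schedules that preserve each process's internal order.
schedules = [
    s for s in permutations(steps)
    if s.index((0, "read")) < s.index((0, "write"))
    and s.index((1, "read")) < s.index((1, "write"))
]
violations = [s for s in schedules if run(s) != 2]
print(len(schedules), len(violations))   # 6 interleavings, 4 violate it
```

A model checker does exactly this kind of enumeration, just over state spaces with millions of states rather than six schedules.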

“Programming languages have a very precise goal, which is to be able to write code. And the thing that I’ve been saying over and over again is that programming is more than just coding,” Lamport added. “It’s not just coding, that’s the easy part of programming. The hard part of programming is getting the algorithms right.”

Lamport also noted that he deliberately chose to make TLA+ look like mathematics, not like another programming language. “It really forces people to think above the code level,” Lamport said, adding that engineers often tell him it changes the way they think.

As for those companies that don’t use TLA+ or a similar methodology, Lamport says he’s worried. “I’m really comforted that [Microsoft] is using TLA+ because I don’t see how anyone could do it without using that kind of mathematical thinking — and I worry about what the other systems that we wind up using built by other organizations — I worry about how reliable they are.”


Sep 20, 2018

AI could help push Neo4j graph database growth

Graph databases have always been useful to help find connections across a vast data set, and it turns out that capability is quite handy in artificial intelligence and machine learning too. Today, Neo4j, the makers of the open source and commercial graph database platform, announced the release of Neo4j 3.5, which has a number of new features aimed specifically at AI and machine learning.

Neo4j founder and CEO Emil Eifrem says he recognized the connection between AI and machine learning and graph databases a while ago, but it has taken some time for the market to catch up to the idea.

“There has been a lot of momentum around AI and graphs… Graphs are very fundamental to AI. At the same time we were seeing some early use cases, but not really broad adoption, and that’s what we’re seeing right now,” he explained.

AI graph use cases. Graphic: Neo4j

To help advance AI use cases, today’s release includes a new full-text search capability, which Eifrem says has been one of the most requested features. This is important because when you are making connections between entities, you have to be able to find all of the examples regardless of how a term is worded: human versus humans versus people, for example.
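A toy sketch of why that matters (Neo4j’s real full-text indexes are backed by Apache Lucene analyzers, which handle this far more robustly; the synonym list here is invented for illustration):

```python
# Minimal text normalization: lowercase, map synonyms, strip a crude
# plural ending, so different surface forms resolve to one search term.

SYNONYMS = {"people": "human", "person": "human"}

def normalize(token: str) -> str:
    token = token.lower()
    token = SYNONYMS.get(token, token)
    if token.endswith("s") and len(token) > 3:
        token = token[:-1]                   # very crude stemming
    return SYNONYMS.get(token, token)

print([normalize(t) for t in ["Human", "humans", "people", "Person"]])
# All four map to 'human', so a search for any form finds the others.
```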

Part of that meant building its own indexing engine to increase indexing speed, which becomes essential with ever more data to process. “Another really important piece of functionality is that we have improved our data ingestion very significantly. We have 5x end-to-end performance improvements when it comes to importing data. And this is really important for connected feature extraction, where obviously, you need a lot of data to be able to train the machine learning,” he said. That also means faster sorting of data.

Other features in the new release include improvements to Cypher, the company’s own graph query language, and better graph visualization, which is useful for seeing how machine learning algorithms reach their conclusions, an area known as AI explainability. The company also announced support for the Go language and security improvements.

Graph databases are growing increasingly important as we look to find connections between data. The most common use case is the knowledge graph, which is what lets us see connections in huge data sets. Common examples include seeing who we are connected to on a social network like Facebook, or getting recommendations for similar items on an ecommerce site based on what we have bought.

Other use cases include connected feature extraction, a common machine learning training technique that can look at a lot of data and extract the connections, the context and the relationships for a particular piece of data, such as suspects in a criminal case and the people connected to them.

Neo4j has over 300 large enterprise customers including Adobe, Microsoft, Walmart, UBS and NASA. The company launched in 2007 and has raised $80 million. The last round was $36 million in November 2016.

Sep 20, 2018

MariaDB acquires Clustrix

MariaDB, the company behind the eponymous MySQL drop-in replacement database, today announced that it has acquired Clustrix, which itself is a MySQL drop-in replacement database, but with a focus on scalability. MariaDB will integrate Clustrix’s technology into its own database, which will allow it to offer its users a more scalable database service in the long run.

That by itself would be an interesting development for the popular open source database company. But there’s another angle to this story, too. In addition to the acquisition, MariaDB also announced today that cloud computing company ServiceNow is investing in MariaDB, an investment that helped it get to today’s acquisition. ServiceNow doesn’t typically make investments, though it has made a few acquisitions. It is, however, a very large MariaDB user, and it’s exactly the kind of customer that will benefit from the Clustrix acquisition.

MariaDB CEO Michael Howard tells me that ServiceNow currently supports about 80,000 instances of MariaDB. With this investment (which is actually an add-on to MariaDB’s 2017 Series C round), ServiceNow’s SVP of Development and Operations Pat Casey will join MariaDB’s board.

Why would MariaDB acquire a company like Clustrix, though? When I asked Howard about the motivation, he noted that he’s now seeing more companies like ServiceNow that are looking at a more scalable way to run MariaDB. Howard noted that it would take years to build a new database engine from the ground up.

“You can hire a lot of smart people individually, but not necessarily have that experience built into their profile,” he said. “So that was important, and then to have a jumpstart in relation to this market opportunity — this mandate from our market. It typically takes about nine years to get a brand new, thorough database technology off the ground. It’s not like a SaaS application where you can get a front-end going in about a year or so.”

Howard also stressed that the fact that the teams at Clustrix and MariaDB share the same vocabulary, given that they both work on similar problems and aim to be compatible with MySQL, made this a good fit.

While integrating the Clustrix database technology into MariaDB won’t be trivial, Howard stressed that the database was always built to accommodate external database storage engines. MariaDB will have to make some changes to its APIs to be ready for the clustering features of Clustrix. “It’s not going to be a 1-2-3 effort,” he said. “It’s going to be a heavy-duty effort for us to do this right. But everyone on the team wants to do it because it’s good for the company and our customers.”

MariaDB did not disclose the price of the acquisition. Since it was founded in 2006, though, the Y Combinator-incubated Clustrix had raised just under $72 million. MariaDB has raised just under $100 million so far, so it’s probably a fair guess that Clustrix didn’t sell for a large multiple of that.

Sep 19, 2018

Google’s Cloud Memorystore for Redis is now generally available

After five months in public beta, Google today announced that its Cloud Memorystore for Redis, its fully managed in-memory data store, is now generally available.

The service, which is fully compatible with the Redis protocol, promises to offer sub-millisecond responses for applications that need to use in-memory caching. And because of its compatibility with Redis, developers should be able to easily migrate their applications to this service without making any code changes.
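The standard pattern here is cache-aside, sketched below. Because any Redis-protocol client works against Memorystore, a real application would point a client such as redis-py at the instance’s address; in this toy sketch a plain dict stands in for the client, and load_from_database is a made-up stand-in for a slow query:

```python
cache = {}   # stands in for a Redis client's get/set

def load_from_database(key):
    # Pretend this is an expensive database query.
    return f"value-for-{key}"

def get(key):
    if key in cache:                     # hit: sub-millisecond in Redis
        return cache[key]
    value = load_from_database(key)      # miss: do the slow lookup...
    cache[key] = value                   # ...and populate the cache
    return value

print(get("user:42"))   # first call misses and fills the cache
print(get("user:42"))   # second call is served from the cache
```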

Cloud Memorystore offers two service tiers — a basic one for simple caching and a standard tier for users who need a highly available Redis instance. For the standard tier, Google offers a 99.9 percent availability SLA.

Since it first launched in beta, Google added a few additional capabilities to the service. You can now see your metrics in Stackdriver, for example. Google also added custom IAM roles and improved logging.

As for pricing, Google charges per GB-hour, depending on the service level and capacity you use. You can find the full pricing list here.

Sep 19, 2018

GitLab raises $100M

GitLab, the developer service that aims to offer a full lifecycle DevOps platform, today announced that it has raised a $100 million Series D funding round at a valuation of $1.1 billion. The round was led by Iconiq.

As GitLab CEO Sid Sijbrandij told me, this round, which brings the company’s total funding to $145.5 million, will help it reach its goal of an IPO by November 2020.

According to Sijbrandij, GitLab’s original plan was to raise a new funding round at a valuation over $1 billion early next year. But since Iconiq came along with an offer that pretty much matched what the company set out to achieve in a few months anyway, the team decided to go ahead and raise the round now. Unsurprisingly, Microsoft’s acquisition of GitHub earlier this year helped to accelerate those plans, too.

“We weren’t planning on fundraising actually. I did block off some time in my calendar next year, starting from February 25th to do the next fundraise,” Sijbrandij said. “Our plan is to IPO in November of 2020 and we anticipated one more fundraise. I think in the current climate, where the macroeconomics are really good and GitHub got acquired, people are seeing that there’s one independent company, one startup left basically in this space. And we saw an opportunity to become best in class in a lot of categories.”

As Sijbrandij stressed, while most people still look at GitLab as a GitHub and Bitbucket competitor (and given the similarity in their names, who wouldn’t?), GitLab wants to be far more than that. It now offers products in nine categories and also sees itself as competing with the likes of VersionOne, Jira, Jenkins, Artifactory, Electric Cloud, Puppet, New Relic and BlackDuck.

“The biggest misunderstanding we’re seeing is that GitLab is an alternative to GitHub and we’ve grown beyond that,” he said. “We are now in nine categories all the way from planning to monitoring.”

Sijbrandij notes that there’s a billion-dollar player in every space that GitLab competes in. “But we want to be better,” he said. “And that’s only possible because we are open core, so people co-create these products with us. That being said, there’s still a lot of work on our side, helping to get those contributions over the finish line, making sure performance and quality stay up, establish a consistent user interface. These are things that typically don’t come from the wider community and with this fundraise of $100 million, we will be able to make sure we can sustain that effort in all the different product categories.”

Given this focus, GitLab will invest most of the funding in its engineering efforts to build out its existing products but also to launch new ones. The company plans to launch new features like tracing and log aggregation, for example.

With this very public commitment to an IPO, GitLab is also signaling that it plans to stay independent. That’s very much Sijbrandij’s plan, at least, though he admitted that “there’s always a price” if somebody came along and wanted to acquire the company. He did note that he likes the transparency that comes with being a public company.

“We always managed to be more bullish about the company than the rest of the world,” he said. “But the rest of the world is starting to catch up. This fundraise is a statement that we now have the money to become a public company, and we’re not interested in being acquired. That is what we’re setting out to do.”

Sep 12, 2018

Nvidia launches the Tesla T4, its fastest data center inferencing platform yet

Nvidia today announced its new GPU for machine learning and inferencing in the data center. The new Tesla T4 GPUs (where the ‘T’ stands for Nvidia’s new Turing architecture) are the successors to the current batch of P4 GPUs that virtually every major cloud computing provider now offers. Google, Nvidia said, will be among the first to bring the new T4 GPUs to its Cloud Platform.

Nvidia argues that the T4s are significantly faster than the P4s. For language inferencing, for example, the T4 is 34 times faster than using a CPU and more than 3.5 times faster than the P4. Peak performance for the T4 is 260 TOPS for 4-bit integer operations and 65 TFLOPS for floating point operations. The T4 sits on a standard low-profile 75 watt PCI-e card.

What’s most important, though, is that Nvidia designed these chips specifically for AI inferencing. “What makes Tesla T4 such an efficient GPU for inferencing is the new Turing tensor core,” said Ian Buck, Nvidia’s VP and GM of its Tesla data center business. “[Nvidia CEO] Jensen [Huang] already talked about the Tensor core and what it can do for gaming and rendering and for AI, but for inferencing — that’s what it’s designed for.” In total, the chip features 320 Turing Tensor cores and 2,560 CUDA cores.

In addition to the new chip, Nvidia is also launching a refresh of its TensorRT software for optimizing deep learning models. This new version also includes the TensorRT inference server, a fully containerized microservice for data center inferencing that plugs seamlessly into an existing Kubernetes infrastructure.

Sep 11, 2018

Twilio’s contact center products just got more analytical with Ytica acquisition

Twilio, a company best known for supplying communications APIs for developers, has a product called Twilio Flex for building sophisticated customer service applications on top of those APIs. Today, it announced it was acquiring Ytica (pronounced “Why-tica”) to provide an operational and analytical layer on top of the customer service solution.

The companies would not discuss the purchase price, but Twilio indicated it does not expect the acquisition to have a material impact on its “results, operations or financial condition.” In other words, it probably didn’t cost much.

Ytica, which is based in Prague, has actually been a partner with Twilio for some time, so coming together in this fashion really made a lot of sense, especially as Twilio has been developing Flex.

Twilio Flex is an app platform for contact centers, which offers a full stack of applications and allows users to deliver customer support over multiple channels, Al Cook, general manager of Twilio Flex, explained. “Flex deploys like SaaS, but because it’s built on top of APIs, you can reach in and change how Flex works,” he said. That is very appealing, especially for larger operations looking for a flexible, cloud-based solution without the baggage of on-prem legacy products.

What the product was lacking, however, was a native way to manage customer service representatives from within the application, and understand through analytics and dashboards, how well or poorly the team was doing. Having that ability to measure the effectiveness of the team becomes even more critical the larger the group becomes, and Cook indicated some Flex users are managing enormous groups with 10,000-20,000 employees.

Ytica provides a way to measure the performance of customer service staff, allowing management to monitor and intervene and coach when necessary. “It made so much sense to join together as one team. They have huge experience in the contact center, and a similar philosophy to build something customizable and programmable in the cloud,” Cook said.

While Ytica works with other vendors beyond Twilio, CEO Simon Vostrý says that they will continue to support those customers, even as they join the Twilio family. “We can run Flex and can continue to run this separately. We have customers running on other SaaS platforms, and we will continue to support them,” he said.

The company will remain in Prague and become a Twilio satellite office. All 14 employees are expected to join the Twilio team and Cook says plans are already in the works to expand the Prague team.
