Oct
04
2018
--

GitHub gets a new and improved Jira Software Cloud integration

Atlassian’s Jira has become a standard for managing large software projects in many companies. Many of those same companies also use GitHub as their source code repository and, unsurprisingly, there has long been an official way to integrate the two. That old way, however, was often slow, limited in its capabilities and unable to cope with the large code bases that many enterprises now manage on GitHub.

Almost as if to prove that GitHub remains committed to an open ecosystem, even after the Microsoft acquisition, the company today announced a new and improved integration between the two products.

“Working with Atlassian on the Jira integration was really important for us,” GitHub’s director of ecosystem engineering Kyle Daigle told me ahead of the announcement. “Because we want to make sure that our developer customers are getting the best experience of our open platform that they can have, regardless of what tools they use.”

So a couple of months ago, the team decided to build its own Jira integration from the ground up, and it’s committed to maintaining and improving it over time. As Daigle noted, the improvements here include better performance and a better user experience.

The new integration also makes it easier to view all the pull requests, commits and branches from GitHub that are associated with a Jira issue, search for issues based on information from GitHub and see the status of the development work right in Jira. And because changes in GitHub trigger an update to Jira, that data should remain up to date at all times.

The old Jira integration via the so-called Jira DVCS connector will be deprecated, and GitHub will start prompting existing users to upgrade over the next few weeks. The new integration ships as a GitHub app, so it also comes with all of the security features the platform has to offer.

Sep
24
2018
--

Microsoft wants to put your data in a box

AWS has its Snowball (and Snowmobile truck), Google Cloud has its data transfer appliance and Microsoft has its Azure Data Box. All of these are physical appliances that let enterprises move lots of data to the cloud by loading it onto the machines and then shipping them to the provider, which uploads the contents. Microsoft’s Azure Data Box launched into preview about a year ago and today, the company is announcing a number of updates and adding a few new boxes, too.

First of all, the standard 50-pound, 100-terabyte Data Box is now generally available. If you’ve got a lot of data to transfer to the cloud — or maybe collect a lot of offline data — then FedEx will happily pick this one up and Microsoft will upload the data to Azure and charge you for your storage allotment.

If you’ve got a lot more data, though, then Microsoft now also offers the Azure Data Box Heavy. This new box, which is now in preview, can hold up to one petabyte of data. Microsoft did not say how heavy the Data Box Heavy is, though.

Also new is the Azure Data Box Edge, which is now also in preview. In many ways, this is the most interesting of the additions since it goes well beyond transporting data. As the name implies, Data Box Edge is meant for edge deployments where a company collects data. What makes this version stand out is that it’s basically a small data center rack that lets you process data as it comes in. It even includes an FPGA to run AI algorithms at the edge.

Using this box, enterprises can collect the data, transform and analyze it on the box, and then send it to Azure over the network (and not in a truck). This lets users cut back on bandwidth costs and avoid sending all of their data to the cloud for processing.

Also part of the same Data Box family is the Data Box Gateway. This one, however, is a virtual appliance that runs on Hyper-V and VMware and lets users create a data transfer gateway for importing data into Azure. That’s not quite as interesting as a hardware appliance, but useful nonetheless.

more Microsoft Ignite 2018 coverage

Sep
19
2018
--

Google’s Cloud Memorystore for Redis is now generally available

After five months in public beta, Google today announced that its Cloud Memorystore for Redis, its fully managed in-memory data store, is now generally available.

The service, which is fully compatible with the Redis protocol, promises to offer sub-millisecond responses for applications that need to use in-memory caching. And because of its compatibility with Redis, developers should be able to easily migrate their applications to this service without making any code changes.
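Because the service speaks the standard Redis wire protocol, existing application code typically only needs to be pointed at the Memorystore instance’s address. The sketch below illustrates the cache-aside pattern such an in-memory store is commonly used for. All names here are illustrative, not part of Google’s API, and a tiny in-memory stub stands in for the server so the sketch runs anywhere; against a real instance you would instead create a client with the redis-py package, e.g. `redis.Redis(host="<memorystore-ip>", port=6379)`, with no other code changes.

```python
import time

class FakeRedis:
    """Minimal stand-in exposing the two Redis commands this sketch uses."""
    def __init__(self):
        self._data = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        value, expires = self._data.get(key, (None, 0.0))
        return value if time.time() < expires else None

    def setex(self, key, ttl, value):
        # SETEX: store the value with a time-to-live in seconds.
        self._data[key] = (value, time.time() + ttl)

client = FakeRedis()  # with Memorystore: redis.Redis(host=..., port=6379)

def get_user_profile(user_id):
    """Cache-aside read: try the cache first, fall back to the 'database'."""
    key = f"user:{user_id}"
    cached = client.get(key)
    if cached is not None:
        return cached                      # sub-millisecond cache hit
    profile = f"profile-for-{user_id}"     # stands in for a slow DB query
    client.setex(key, 60, profile)         # cache the result for 60 seconds
    return profile
```

The point of the pattern is that only the client construction changes when moving to a managed instance; the `get`/`setex` calls are identical.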

Cloud Memorystore offers two service tiers — a basic one for simple caching and a standard tier for users who need a highly available Redis instance. For the standard tier, Google offers a 99.9 percent availability SLA.

Since it first launched in beta, Google added a few additional capabilities to the service. You can now see your metrics in Stackdriver, for example. Google also added custom IAM roles and improved logging.

As for pricing, Google charges per GB-hour, depending on the service level and capacity you use. You can find the full pricing list here.

Sep
19
2018
--

GitLab raises $100M

GitLab, the developer service that aims to offer a full lifecycle DevOps platform, today announced that it has raised a $100 million Series D funding round at a valuation of $1.1 billion. The round was led by Iconiq.

As GitLab CEO Sid Sijbrandij told me, this round, which brings the company’s total funding to $145.5 million, will help it achieve its goal of reaching an IPO by November 2020.

According to Sijbrandij, GitLab’s original plan was to raise a new funding round at a valuation over $1 billion early next year. But since Iconiq came along with an offer that pretty much matched what the company set out to achieve in a few months anyway, the team decided to go ahead and raise the round now. Unsurprisingly, Microsoft’s acquisition of GitHub earlier this year helped to accelerate those plans, too.

“We weren’t planning on fundraising actually. I did block off some time in my calendar next year, starting from February 25th to do the next fundraise,” Sijbrandij said. “Our plan is to IPO in November of 2020 and we anticipated one more fundraise. I think in the current climate, where the macroeconomics are really good and GitHub got acquired, people are seeing that there’s one independent company, one startup left basically in this space. And we saw an opportunity to become best in class in a lot of categories.”

As Sijbrandij stressed, while most people still look at GitLab as a GitHub and Bitbucket competitor (and given the similarity in their names, who wouldn’t?), GitLab wants to be far more than that. It now offers products in nine categories and also sees itself as competing with the likes of VersionOne, Jira, Jenkins, Artifactory, Electric Cloud, Puppet, New Relic and BlackDuck.

“The biggest misunderstanding we’re seeing is that GitLab is an alternative to GitHub and we’ve grown beyond that,” he said. “We are now in nine categories all the way from planning to monitoring.”

Sijbrandij notes that there’s a billion-dollar player in every space in which GitLab competes. “But we want to be better,” he said. “And that’s only possible because we are open core, so people co-create these products with us. That being said, there’s still a lot of work on our side, helping to get those contributions over the finish line, making sure performance and quality stay up, establish a consistent user interface. These are things that typically don’t come from the wider community and with this fundraise of $100 million, we will be able to make sure we can sustain that effort in all the different product categories.”

Given this focus, GitLab will invest most of the funding in its engineering efforts to build out its existing products but also to launch new ones. The company plans to launch new features like tracing and log aggregation, for example.

With this very public commitment to an IPO, GitLab is also signaling that it plans to stay independent. That’s very much Sijbrandij’s plan, at least, though he admitted that “there’s always a price” if somebody came along and wanted to acquire the company. He did note that he likes the transparency that comes with being a public company.

“We always managed to be more bullish about the company than the rest of the world,” he said. “But the rest of the world is starting to catch up. This fundraise is a statement that we now have the money to become a public company where we’re not interested in being acquired. That is what we’re setting out to do.”

Sep
06
2018
--

PagerDuty raises $90M to wake up more engineers in the middle of the night

PagerDuty, the popular service that helps businesses monitor their tech stacks, manage incidents and alert engineers when things go sideways, today announced that it has raised a $90 million Series D round at a valuation of $1.3 billion. With this, PagerDuty, which was founded in 2009, has now raised well over $170 million.

The round was led by T. Rowe Price Associates and Wellington Management. Accel, Andreessen Horowitz and Bessemer Venture Partners participated. Given the leads in this round, chances are that PagerDuty is gearing up for an IPO.

“This capital infusion allows us to continue our investments in innovation that leverages artificial intelligence and machine learning, enabling us to help our customers transform their companies and delight their customers,” said Jennifer Tejada, CEO at PagerDuty in today’s announcement. “From a business standpoint, we can strengthen our investment in and development of our people, our most valuable asset, as we scale our operations globally. We’re well positioned to make the lives of digital workers better by elevating work to the outcomes that matter.”

Currently PagerDuty users include the likes of GE, Capital One, IBM, Spotify and virtually every other software company you’ve ever heard of. In total, more than 10,500 enterprises now use the service. While it’s best known for its alerting capabilities, PagerDuty has expanded well beyond that over the years, though it’s still a core part of its service. Earlier this year, for example, the company announced its new AIOps services that aim to help businesses reduce the amount of noisy and unnecessary alerts. I’m sure there’s a lot of engineers who are quite happy about that (and now sleep better).

Sep
05
2018
--

Google adds a bunch of rugged devices to its Android Enterprise Recommended program

Rugged smartphones, the kind of devices that businesses can give to their employees who work in harsh environments, are a bit of a specialty market. Few consumers, after all, choose their smartphones based on how well they survive six-foot drops. But there is definitely a market there, and IDC currently expects that the market for Android-based rugged devices will grow at 23 percent annually over the next five years.

It’s maybe no surprise that Google is now expanding its Android Enterprise Recommended program to include rugged devices, too. Chances are you’ve never heard of many of the manufacturers in this first batch (or thought of them as smartphone manufacturers): Zebra, Honeywell, Sonim, Point Mobile, Datalogic. Panasonic, which has a long history of building rugged devices, will also soon become part of this program.

The minimum requirements for these devices are pretty straightforward: they have to support Android 7+, offer security updates within 90 days of release from Google and, because they are rugged devices, after all, be certified for ingress protection and rated for drop testing. They’ll also have to support at least one more major OS release.

“Today’s launch continues our commitment to improving the enterprise experience for customers,” Google writes in today’s announcement. “We hope these devices will serve existing use cases and also enable companies to pursue new mobility use cases to help them realize their goals.”

Aug
29
2018
--

Google takes a step back from running the Kubernetes development infrastructure

Google today announced that it is providing the Cloud Native Computing Foundation (CNCF) with $9 million in Google Cloud credits to help further its work on the Kubernetes container orchestrator and that it is handing over operational control of the project to the community. These credits will be split over three years and are meant to cover the infrastructure costs of building, testing and distributing the Kubernetes software.

Why does this matter? Until now, Google hosted virtually all the cloud resources that supported the project, like its CI/CD testing infrastructure, container downloads and DNS services on its cloud. But Google is now taking a step back. With the Kubernetes community reaching a state of maturity, Google is transferring all of this to the community.

Between the testing infrastructure and hosting container downloads, the Kubernetes project regularly runs more than 150,000 containers on 5,000 virtual machines, so the cost of running these systems quickly adds up. The Kubernetes container registry has served almost 130 million downloads since the launch of the project.

It’s also worth noting that the CNCF now includes a wide range of members that typically compete with each other. We’re talking Alibaba Cloud, AWS, Microsoft Azure, Google Cloud, IBM Cloud, Oracle, SAP and VMware, for example. All of these profit from the work of the CNCF and the Kubernetes community. Google doesn’t say so outright, but it’s fair to assume that it wanted others to shoulder some of the burdens of running the Kubernetes infrastructure, too. Similarly, some of the members of the community surely didn’t want to be so closely tied to Google’s infrastructure, either.

“By sharing the operational responsibilities for Kubernetes with contributors to the project, we look forward to seeing the new ideas and efficiencies that all Kubernetes contributors bring to the project operations,” Google Kubernetes Engine product manager William Deniss writes in today’s announcement. He also notes that a number of Googlers will still be involved in running the Kubernetes infrastructure.

“Google’s significant financial donation to the Kubernetes community will help ensure that the project’s constant pace of innovation and broad adoption continue unabated,” said Dan Kohn, the executive director of the CNCF. “We’re thrilled to see Google Cloud transfer management of the Kubernetes testing and infrastructure projects into contributors’ hands — making the project not just open source, but openly managed, by an open community.”

It’s unclear whether the project plans to take some of the Google-hosted infrastructure and move it to another cloud, but it could definitely do so — and other cloud providers could step up and offer similar credits, too.

Aug
29
2018
--

Storage provider Cloudian raises $94M

Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.

With this, the seven-year-old company has now raised a total of $174 million.

As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.

What’s important to stress here is that Cloudian’s focus is on on-premise storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.

Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, whether from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.

Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”

As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anywhere.”

Aug
23
2018
--

Mixmax launches IFTTT-like rules to help you manage your inbox

Mixmax, a service that aims to make email and other outbound communications more usable and effective, today announced the official launch of its new IFTTT-like rules for automating many of the most repetitive aspects of your daily email workflow.

On the one hand, this new feature is a bit like your standard email filter on steroids (and with connections to third-party tools like Slack, Salesforce, DocuSign, Greenhouse and Pipedrive). Thanks to this, you can now receive an SMS when a customer who spends more than $5,000 a month emails you, for example.

But rules also can be triggered by any of the third-party services the company currently supports. Maybe you want to send out a meeting reminder based on your calendar entries, for example. You can then set up a rule that always emails a reminder a day before the meeting, together with all the standard info you’d want to send in that email.
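The trigger/condition/action model behind IFTTT-style rules like these can be sketched in a few lines. To be clear, none of the names below come from Mixmax’s actual product or API; this is a generic, hypothetical illustration of how such a rule engine is typically structured, using the article’s $5,000-customer example.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Rule:
    trigger: str                        # event type, e.g. "email.received"
    condition: Callable[[dict], bool]   # predicate over the event payload
    action: Callable[[dict], None]      # side effect to run on a match

@dataclass
class RuleEngine:
    rules: list = field(default_factory=list)

    def add(self, rule: Rule) -> None:
        self.rules.append(rule)

    def dispatch(self, trigger: str, event: dict) -> None:
        """Run the action of every rule whose trigger and condition match."""
        for rule in self.rules:
            if rule.trigger == trigger and rule.condition(event):
                rule.action(event)

# Example: alert me when a customer spending over $5,000/month emails.
alerts = []  # stands in for an SMS gateway
engine = RuleEngine()
engine.add(Rule(
    trigger="email.received",
    condition=lambda e: e.get("monthly_spend", 0) > 5000,
    action=lambda e: alerts.append(f"SMS: {e['sender']} emailed you"),
))
engine.dispatch("email.received",
                {"sender": "big@corp.com", "monthly_spend": 9000})
```

The same dispatch loop handles calendar-driven rules too: a scheduler would simply emit a different trigger (say, a meeting-reminder event) a day before each calendar entry.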

“One way we think about Mixmax is that we want to do for externally facing teams and people who talk to a lot of customers what GitHub did for engineering and what Slack did for internal team communication,” Mixmax co-founder and CEO Olof Mathé told me. “That’s what we do for external communication.”

While the service started out as a basic Chrome extension for Gmail, it’s now a full-blown email automation system that offers everything from easy calendar sharing to tracking when recipients open an email and, now, building rules around that. Mathé likened it to an executive assistant, but he stressed that he doesn’t think Mixmax is taking anybody’s jobs away. “We’re not here to replace other people,” he said. “We amplify what you are able to do as an individual and give you superpowers so you can become your own personal chief of staff so you get more time.”

The new rules feature takes this to the next level and Mathé and his team plan to build this out more over time. He teased a new feature called “beast mode” that’s coming in the near future and that will see Mixmax propose actions you can take across different applications, for example.

Many of the new rules and connectors will be available to all paying users, though some features, like access to your Salesforce account, will only be available to those on higher-tier plans.

Aug
07
2018
--

Salesforce promotes COO Keith Block to co-CEO alongside founder Marc Benioff

Salesforce is moving to a two CEO model after it promoted executive Keith Block, who was most recently COO, to the position of co-CEO. Block will work alongside Salesforce’s flamboyant founder, chairman and CEO (now co-CEO) Marc Benioff, with both reporting directly to the company’s board.

Block joined Salesforce five years ago after spending 25 years at Oracle, which is where he first met Benioff, who has called him “the best sales executive the enterprise software industry has ever seen.”

News of the promotion was not expected, but in many ways it is just a more formalized continuation of the working relationship that the two executives have developed.

Block’s focus is on leading global sales, alliances and channels, industry strategy, customer success and consulting services, while he also oversees the company’s day-to-day operations. Benioff, meanwhile, heads up product, technology and culture. The latter is a major piece for Salesforce: the company has spent over $8 million since 2015 to address wage gaps pertaining to race and gender, and it has led the tech industry in pushing LGBT rights and more.

“Keith has been my trusted partner in running Salesforce for the past five years, and I’m thrilled to welcome him as co-CEO,” said Benioff in a statement. “Keith has outstanding operational expertise and corporate leadership experience, and I could not be happier for his promotion and this next level of our partnership.”

This clear division of responsibility from the start may enable Salesforce to smoothly transition to this new management structure, whilst helping it continue its incredible business growth. Revenue for the most recent quarter surpassed $3 billion for the first time, jumping 25 percent year-on-year while its share price is up 60 percent over the last twelve months.

When Block became COO in 2016, Benioff backed him to take the company past $10 billion in revenue and that feat was accomplished last November. Benioff enjoys setting targets and he’s been vocal about reaching $60 billion revenue by 2034, but in the medium term he is looking at reaching $23 billion by 2020 and the co-CEO strategy is very much a part of that growth target.

“We’ve said we’ll do $23 billion in fiscal year 2022 and we can now just see tremendous trajectory beyond that. Cementing Keith and I together as the leadership is really the key to accelerating future growth,” he told Fortune in an interview.
