Sep
24
2018
--

Microsoft’s SQL Server gets built-in support for Spark and Hadoop

It’s time for the next version of SQL Server, Microsoft’s flagship database product. The company today announced the first public preview of SQL Server 2019 and while yet another update to a proprietary database may not seem all that exciting at first glance, Microsoft is trying to do a few rather interesting things with this update.

At the core of the most interesting updates is an acknowledgement that very few companies use only a single database product. So what Microsoft is doing with SQL Server is adding new connectors that allow businesses to use SQL Server to query other databases, including those of Oracle, Teradata and MongoDB. This turns SQL Server into something of a virtual integration layer, and the data never needs to be replicated or moved to SQL Server.
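
To make the data virtualization idea concrete, here is a rough sketch of what querying a remote Oracle table through a SQL Server external table could look like, issued from Python via pyodbc. The server names, credential, table layout and exact PolyBase option syntax are illustrative assumptions, not confirmed preview syntax.

```python
import pyodbc

# Illustrative connection to a SQL Server 2019 preview instance.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver2019;DATABASE=sales;UID=admin;PWD=example"
)
cursor = conn.cursor()

# Register the remote Oracle instance as an external data source
# (assumes a database-scoped credential named OracleCred already exists).
cursor.execute("""
    CREATE EXTERNAL DATA SOURCE OracleSales
    WITH (LOCATION = 'oracle://oracle-host:1521', CREDENTIAL = OracleCred);
""")

# Expose one remote table locally; no rows are copied into SQL Server.
cursor.execute("""
    CREATE EXTERNAL TABLE dbo.remote_orders (
        order_id INT,
        amount   DECIMAL(10, 2)
    )
    WITH (LOCATION = 'ORCL.SALES.ORDERS', DATA_SOURCE = OracleSales);
""")

# This query is pushed down to Oracle at read time.
for row in cursor.execute("SELECT TOP 5 * FROM dbo.remote_orders"):
    print(row)
```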

But there is more! SQL Server 2019 will come with built-in support for Spark and the Hadoop File System. That’s an acknowledgement of the popularity of these open-source tools, as well as of the big data workloads that SQL Server, if it wants to stay relevant, has to be able to support, too, and in a way that many companies are already familiar with.

There’s another open source aspect here, too: SQL Server will support these big data clusters with the help of the Google-incubated Kubernetes container orchestration system. Every cluster will include SQL Server, the Hadoop file system and Spark.

As for the name, it’s worth noting that many pundits expected a “SQL Server 2018,” but Microsoft opted to skip a year after SQL Server 2016 and 2017. So SQL Server 2019 it is.

Aug
08
2018
--

Oracle’s database service offerings could be its last best hope for cloud success

Yesterday Oracle announced a new online transaction processing database service, finally bringing its key database technology into the cloud. The company, which has been around for over four decades, made its mark selling databases to the biggest companies in the world, but as the world has changed, large enterprise customers have been moving increasingly to the cloud. These autonomous database products could mark Oracle’s best hope for cloud success.

The database giant, which has a market cap of over $194 billion and over $67 billion in cash on hand, certainly has options no matter what happens with its cloud products. Yet if the future of enterprise computing is in the cloud, the company needs to find some sustained success there, and what better way to lure its existing customers than with its bread-and-butter database products.

Oracle has demonstrated a stronger commitment to the cloud in recent years after showing much disdain for it; earlier this year alone, it announced it would build 12 new regional data centers. But it wasn’t always that way. Company founder and executive chairman Larry Ellison famously mocked the cloud as “more fashion driven than women’s fashion.” Granted, that was in 2008, but his company certainly came late to the party.

A different kind of selling

The cloud is not just a different way of delivering software, platform and infrastructure; it’s a different way of selling. While switching databases might not be easy for most large companies, the cloud subscription payment model still offers a way out that licensing rarely did. As such, it requires more of a partnership between vendor and customer. After years of having a reputation for being aggressive with customers, Oracle may find this shift even harder to make.

Salesforce exec Keith Block (who was promoted to co-CEO just yesterday) worked at Oracle for 20 years before joining Salesforce in 2013. In an interview with TechCrunch in 2016, when asked specifically about the differences between Oracle and Salesforce, he contrasted the two companies’ approaches and the challenges a company like Oracle, born and raised in the on-prem world, faces as it shifts to the cloud. It takes more than a change in platform, he said.

“You also have to have the right business model and when you think about our business model, it is a ‘shared success model’. Basically, as you adopt the technology, it’s married to our payment schemes. So that’s very, very important because if the customer doesn’t win, we don’t win,” Block said at the time.

John Dinsdale, chief analyst and managing director at Synergy Research, a firm that keeps close watch on the cloud market, agrees that companies born on-prem face adjustments when moving to the cloud. “In order to survive and thrive in today’s cloud-oriented environment, any software company that grew up in the on-prem world needs to have powerful, cost-effective products that can be packaged and delivered flexibly – irrespective of whether that is via the cloud or via some form of enhanced on-prem solution,” he said.

Database as a Service or bust

All that said, if Oracle can adjust, it has the advantage of an existing foothold inside the enterprise. It also claims a painless transition from the on-prem Oracle database to its database cloud service, which could be attractive to any company considering a move to the cloud. There is also the autonomous aspect of its cloud database offerings, which promise to be self-tuning and self-healing, with automated maintenance and updates and very little downtime.

Carl Olofson, an analyst with IDC who covers the database market, sees Oracle’s database service offerings as critical to its cloud aspirations, but expects business could move slowly here. “Certainly, this development (Oracle’s database offerings) looms large for those whose core systems run on Oracle Database, but there are other factors to consider, including any planned or active investment in SaaS on other cloud platforms, the overall future database strategy, the complexity of moving operations from the datacenter to the cloud, and so on. So, I expect actual movement here to be gradual,” he said.

Adam Ronthal, an analyst at Gartner, sees the database service offerings as Oracle’s best chance for cloud success. “The Autonomous Data Warehouse and the Autonomous Transaction Processing offerings are really the first true cloud offerings from Oracle. They are designed and architected for cloud, and priced competitively. They are strategic and it is very important for Oracle to demonstrate success and value with these offerings as they build credibility and momentum for their cloud offerings,” he said.

The big question, still unanswered, is whether Oracle can deliver in a cloud context using a more collaborative sales model. While it has shown some early success as it transitions to the cloud, it’s always easier to move from a small market-share number to a bigger one, and the numbers (when the company has shared them) have flipped in the wrong direction in recent earnings reports.

As the stakes grow ever higher, Oracle is betting on what it has known best all along: the databases that made the company. We’ll have to wait and see if that bet pays off, or if Oracle’s days of database dominance are numbered as businesses look to public cloud alternatives.

Aug
07
2018
--

Oracle launches autonomous database for online transaction processing

Oracle executive chairman and CTO, Larry Ellison, first introduced the company’s autonomous database at Oracle Open World last year. The company later launched an autonomous data warehouse. Today, it announced the next step with the launch of the Oracle Autonomous Transaction Processing (ATP) service.

This latest autonomous database tool promises the same level of autonomy — self-repairing, with automated updates and security patches, and minutes or less of downtime a month. Juan Loaiza, SVP for Oracle Systems at the database giant, says the ATP cloud service is a modernized extension of the online transaction processing (OLTP) databases the company has been creating for decades. It has machine learning and automation underpinnings, but it should feel familiar to customers, he says.

“Most of the major companies in the world are running thousands of Oracle databases today. So one simple differentiation for us is that you can just pick up your on-premises database that you’ve had for however many years, and you can easily move it to an autonomous database in the cloud,” Loaiza told TechCrunch.

He says that companies already running OLTP databases are ones like airlines, big banks and financial services companies, online retailers and other mega companies who can’t afford even a half hour of downtime a month. He claims that with Oracle’s autonomous database, the high end of downtime is 2.5 minutes per month and the goal is to get much lower, basically nothing.

Carl Olofson, an IDC analyst who manages the firm’s database management practice, says the product promises much lower operational costs and could give Oracle a leg up in the Database as a Service market. “What Oracle offers that is most significant here is the fact that patches are applied without any operational disruption, and that the database is self-tuning and, to a large degree, self-healing. Given the highly variable nature of OLTP database issues that can arise, that’s quite something,” he said.

Adam Ronthal, an analyst at Gartner who focuses on the database market, says the autonomous database product set will be an important part of Oracle’s push to the cloud moving forward. “These announcements are more cloud announcements than database announcements. They are Oracle coming out to the world with products that are built and architected for cloud and everything that implies — scalability, elasticity and a low operational footprint. Make no mistake, Oracle still has to prove themselves in the cloud. They are behind AWS and Azure and even GCP in breadth and scope of offerings. ATP helps close that gap, at least in the data management space,” he said.

Oracle certainly needs a cloud win, as its cloud business has been heading in the wrong direction over the last couple of earnings reports, to the point that the company stopped breaking out cloud numbers in the June report.

Ronthal says Oracle needs to gain some traction quickly with existing customers if it’s going to be successful here. “Oracle needs to build some solid early successes in their cloud, and these successes are going to come from the existing customer base who are already strategically committed to Oracle databases and are not interested in moving. (This is not all of the customer base, of course.) Once they demonstrate solid successes there, they will be able to expand to net new customers,” he says.

Regardless of how it works out for Oracle, the ATP database service will be available as of today.

Apr
25
2018
--

Google Cloud expands its bet on managed database services

Google announced a number of updates to its cloud-based database services today. For the most part, we’re not talking about any groundbreaking new products here, but all of these updates address specific pain points that enterprises suffer when they move to the cloud.

As Google Director of Product Management Dominic Preuss told me ahead of today’s announcements, Google long saw itself as a thought leader in the database space. For the longest time, though, that thought leadership was all about things like the Bigtable paper and didn’t really manifest itself in the form of products. Projects like the globally distributed Cloud Spanner database are now allowing Google Cloud to put its stamp on this market.

Preuss also noted that many of Google’s enterprise users often start with lifting and shifting their existing workloads to the cloud. Once they have done that, though, they are also looking to launch new applications in the cloud — and at that point, they typically want managed services that free them from having to do the grunt work of managing their own infrastructure.

Today’s announcements mostly fit into this mold of offering enterprises the kind of managed database services they are asking for.

The first of these is the beta launch of Cloud Memorystore for Redis, a fully managed in-memory data store for users who need in-memory caching for capacity buffering and similar use cases.
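
A managed Redis doesn’t change how applications talk to it; the typical capacity-buffering use Google is targeting is plain cache-aside. Below is a minimal sketch with the redis-py client, where the Memorystore IP, the key scheme and the fetch_from_db() lookup are all illustrative assumptions:

```python
import json

import redis

r = redis.Redis(host="10.0.0.3", port=6379)  # assumed Memorystore instance IP


def fetch_from_db(user_id):
    # Hypothetical stand-in for the real (slow) database lookup.
    return {"id": user_id, "name": "example"}


def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)        # cache hit: skip the database
    user = fetch_from_db(user_id)        # cache miss: go to the database
    r.setex(key, 300, json.dumps(user))  # buffer the result for 5 minutes
    return user
```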

Google is also launching a new feature for Cloud Bigtable, the company’s NoSQL database service for big data workloads. Bigtable now features regional replication (or at least it will, once this has rolled out to all users within the next week or so). The general idea here is to give enterprises that previously used Cassandra for their on-premises workloads an alternative in the Google Cloud portfolio, and this cross-zone replication increases the availability and durability of the data they store in the service.

With this update, Google is also making Cloud SQL for PostgreSQL generally available with a 99.95 percent SLA, and it’s adding commit timestamps to Cloud Spanner.
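
Commit timestamps let Spanner record the exact commit time of a write server-side, which is useful for change logs and audit trails. Here is a minimal sketch with the google-cloud-spanner Python client; the instance, database, table and column names are illustrative assumptions, and the target column must be declared with allow_commit_timestamp=true:

```python
from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-db")  # assumed names

with database.batch() as batch:
    batch.insert(
        table="events",
        columns=("event_id", "kind", "created_at"),
        # COMMIT_TIMESTAMP is a sentinel: Spanner substitutes the actual
        # commit time of this transaction when the write is applied.
        values=[("evt-001", "signup", spanner.COMMIT_TIMESTAMP)],
    )
```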

What’s next for Google’s database portfolio? Unsurprisingly, Preuss wouldn’t say, but he did note that the company wants to help enterprises move as many of their workloads to the cloud as they can — and for the most part, that means managed services.

Apr
21
2018
--

Timescale is leading the next wave of NYC database tech

Data is the lifeblood of the modern corporation, yet acquiring, storing, processing, and analyzing it remains a remarkably challenging and expensive project. Every time data infrastructure finally catches up with the streams of information pouring in, another source and more demanding decision-making make the existing technology obsolete.

Few cities rely on data the same way as New York City, nor has any other city so shaped the technology that underpins our data infrastructure. Back in the 1960s, banks and accounting firms helped to drive much of the original computation industry with their massive finance applications. Today, that industry has been supplanted by finance and advertising, both of which need to make microsecond decisions based on petabyte datasets and complex statistical models.

Unsurprisingly, the city’s hunger for data has led to waves of database companies finding their home in the city.

As web applications became increasingly popular in the mid-aughts, SQL databases came under increasing strain to scale, while also proving too inflexible in their data schemas for the fast-moving startups they served. That problem spawned Manhattan-based MongoDB, whose flexible “NoSQL” schemas and horizontal scaling capabilities made it the default choice for a generation of startups. The company went on to raise $311 million, according to Crunchbase, debuted late last year on NASDAQ and trades today at a market cap of $2 billion.

At the same time that the NoSQL movement was hitting its stride, academic researchers and entrepreneurs were exploring how to evolve SQL to scale like its NoSQL competitors, while retaining the kinds of features (joining tables, transactions) that make SQL so convenient for developers.

One leading company in this next generation of database tech is New York-based Cockroach Labs, which was founded in 2015 by a trio of former Square, Viewfinder, and Google engineers. The company has gone on to raise more than $50 million according to Crunchbase from a luminary list of investors including Peter Fenton at Benchmark, Mike Volpi at Index, and Satish Dharmaraj at Redpoint, along with GV and Sequoia.

While web applications have their own peculiar data needs, the rise of the internet of things (IoT) created a whole new set of data challenges. How can streams of data from potentially millions of devices be stored in an easily analyzable manner? How could companies build real-time systems to respond to that data?

Mike Freedman and Ajay Kulkarni saw that problem increasingly manifesting itself in 2015. The two had been roommates at MIT in the late 90s, and then went on separate paths into academia and industry respectively. Freedman went to Stanford for a PhD in computer science, and nearly joined the spinout of Nicira, which sold to VMware in 2012 for $1.26 billion. Kulkarni joked that “Mike made the financially wise decision of not joining them,” and Freedman eventually went to Princeton as an assistant professor, and was awarded tenure in 2013. Kulkarni founded and worked at a variety of startups including GroupMe, as well as receiving an MBA from MIT.

The two had startup dreams, and tried building an IoT platform. As they started building it though, they realized they would need a real-time database to process the data streams coming in from devices. “There are a lot of time series databases, [so] let’s grab one off the shelf, and then we evaluated a few,” Kulkarni explained. They realized what they needed was a hybrid of SQL and NoSQL, and nothing they could find offered the feature set they required to power their platform. That challenge became the problem to be solved, and Timescale was born.

In many ways, Timescale is how you build a database in 2018. Rather than starting de novo, the team decided to build on top of Postgres, a popular open-source SQL database. “By building on top of Postgres, we became the more reliable option,” Kulkarni said of their thinking. In addition, the company opted to make the database fully open source. “In this day and age, in order to get wide adoption, you have to be an open source database company,” he said.
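
That choice shows up directly in the developer experience: a regular Postgres table is promoted to a time-partitioned “hypertable” with a single call, and everything else stays ordinary SQL. A minimal sketch via psycopg2 follows; the connection details and schema are illustrative assumptions, and the timescaledb extension must be installed on the server:

```python
import psycopg2

conn = psycopg2.connect("dbname=metrics user=postgres")  # assumed DSN
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS timescaledb;")

# An ordinary Postgres table holding device readings...
cur.execute("""
    CREATE TABLE conditions (
        time        TIMESTAMPTZ NOT NULL,
        device_id   TEXT,
        temperature DOUBLE PRECISION
    );
""")

# ...becomes a hypertable, transparently partitioned by time, while joins,
# indexes and transactions keep working as in any Postgres database.
cur.execute("SELECT create_hypertable('conditions', 'time');")
conn.commit()
```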

Since the project’s first public git commit on October 18, 2016, the company’s database has received nearly 4,500 stars on GitHub, and it has raised $16.1 million from Benchmark and NEA.

Far more important, though, are its customers, who are definitely not the typical tech startup roster: they include companies from oil and gas, mining, and telecommunications. “You don’t think of them as early adopters, but they have a need, and because we built it on top of Postgres, it integrates into an ecosystem that they know,” Freedman explained. Kulkarni continued, “And the problem they have is that they have all of this time series data, and it isn’t sitting in the corner, it is integrated with their core service.”

New York has been a strong home for the two founders. Freedman continues to be a professor at Princeton, where he has built a pipeline of potential grads for the company. More widely, Kulkarni said, “Some of the most experienced people in databases are in the financial industry, and that’s here.” That’s evident in one of their investors, hedge fund Two Sigma. “Two Sigma had been the only venture firm that we talked to that already had built out their own time series database,” Kulkarni noted.

The two also benefit from paying customers. “I think the Bay Area is great for open source adoption, but a lot of Bay Area companies, they develop their own database tech, or they use an open source project and never pay for it,” Kulkarni said. Being in New York has meant closer collaboration with customers, and ultimately more revenues.

Open source plus revenues. It’s the database way, and the next wave of innovation in the NYC enterprise infrastructure ecosystem.

Mar
13
2018
--

Don’t Get Hit with a Database Disaster: Database Security Compliance

In this post, we discuss database security compliance, what you should be looking at and where to get more information.

As Percona’s Chief Customer Officer, I get the opportunity to talk with a lot of customers. Hearing first-hand about both the problems their technical teams face and the business challenges their companies experience is incredibly valuable for understanding what the market is facing in general. Not every problem you see has a purely technical solution, and not every good technical solution solves the core business problem.

As database technology advances and data continues to be the lifeblood of most modern applications, DBAs will have a say in business-level strategic planning more than ever. This coincides with advances in technology and automation that make many classic manual DBA jobs and tasks obsolete. Traditional DBAs are evolving into a blend of system architect, data strategist and master database architect. I want to talk about the business problems that not only the C-suite cares about, but that DBAs as a whole need to care about in the near future.

Let’s start with one topic everyone should have near the top of their list: security.

We did a recent survey of our customers, and their biggest concern right now is security and compliance.

Not long ago, most DBAs I knew dismissed this topic as “someone else’s problem” (I remember being told that the database is only as secure as the network, so fix the network!). Long gone are the days when network security was enough. Even the DBAs who did worry about security only did so within the limited scope of what the database system could provide out of the box. Again, not enough.

So let me run an experiment:

Raise your hand if your company has some bigger security initiative this year. 

I’m betting a lot of you raised your hand!

Security is not new to the enterprise. It’s been a priority for years now. However, it did not receive hyper-focus in the open source database space until the last three years or so. Why? There have been a number of high-profile database security breaches in the last year, all highlighting the need for better database security. This series of serious data breaches has exposed how fragile some companies’ security protocols are. If that was not enough, new government regulations and laws have made data protection non-optional. That means you have to take the security of your database seriously, or there could be fines and penalties.

Government regulations are nothing new, but their breadth and depth are growing, opening up a whole new challenge for database systems and administrators. GDPR was signed into law two years ago (you can read more here: https://en.wikipedia.org/wiki/General_Data_Protection_Regulation and https://www.dataiq.co.uk/blog/summary-eu-general-data-protection-regulation) and is scheduled to take effect on May 25, 2018. This has many businesses scrambling not only to understand the impact, but to figure out how they need to comply. These regulations redefine simple things, like what constitutes “personal data” (for instance, your anonymous buying preferences or location history, even without your name).

New requirements also mean some areas get a bit more complicated as they approach the gray area of definition. For instance, GDPR guarantees the right to be forgotten. What does this mean? In theory, it means end-users can request that all their personal information is removed from your systems as if they did not exist. Seems simple, but in reality, you can go as far down the rabbit hole as you want. Does your application support this already? What about legacy applications? Even if the apps can handle it, does this mean previously taken database backups have to forget you as well? There is a lot to process for sure.
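
To see how deep the rabbit hole goes even in the simplest case, here is a minimal, purely illustrative erasure sketch for a relational store. The table and column names are assumptions, and a real implementation would also have to cover replicas, analytics copies and, as noted above, old backups:

```python
import pymysql


def forget_user(conn, user_id):
    """Pseudonymize one user's rows instead of hard-deleting them,
    so foreign keys and aggregate statistics stay intact."""
    with conn.cursor() as cur:
        # Overwrite the directly identifying fields in place.
        cur.execute(
            "UPDATE users SET name = 'erased', "
            "email = CONCAT('erased-', id, '@invalid') WHERE id = %s",
            (user_id,),
        )
        # Personal histories (e.g. locations) are dropped outright.
        cur.execute(
            "DELETE FROM location_history WHERE user_id = %s", (user_id,)
        )
    conn.commit()


conn = pymysql.connect(host="localhost", user="app", password="example",
                       database="customers")  # assumed connection details
forget_user(conn, 42)
```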

So what are the things you can do?

  1. Educate yourself and understand expectations, even if you weren’t involved in compliance discussions before.
  2. Start working now on incremental improvements to your data security. This is especially true in the areas where you have some control without massive changes to the application. Encryption at rest is a great place to start if you don’t have it (see the sketch after this list).
  3. Start talking with others in the organization about how to identify and protect personal information.
  4. Look to increase security by default by getting involved in new applications early in the design phase.
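
On the encryption-at-rest point in item 2, here is what that incremental first step can look like on MySQL or Percona Server with InnoDB tablespace encryption. It assumes a keyring plugin is already configured, and the connection details and table name are illustrative:

```python
import pymysql

conn = pymysql.connect(host="localhost", user="root", password="example",
                       database="customers")  # assumed connection details
with conn.cursor() as cur:
    # MySQL 5.7+ / Percona Server: encrypt an existing InnoDB table's
    # tablespace in place (requires a configured keyring plugin).
    cur.execute("ALTER TABLE personal_data ENCRYPTION='Y';")
conn.commit()
```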

The good news is you are not alone in tackling this challenge. Every company must address it. Because of this focus on security, we felt strongly about ensuring we had a security track at Percona Live 2018 this year. These talks from Fastly, Facebook, Percona, and others provide information on how companies around the globe are tackling these security issues. In true open source fashion, we are better when we learn and grow from one another.

What are the Percona Live 2018 security talks?

We have a ton of great security content this year at Percona Live, across a bunch of technologies and open source software. Some of the more interesting Percona Live 2018 security talks are:

Want to attend the Percona Live 2018 security talks? Register for Percona Live 2018 now to get the best price! Use the discount code SeeMeSpeakPL18 for 10% off.

Percona Live Open Source Database Conference 2018 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference will be April 23-25, 2018 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Feb
15
2018
--

MongoDB gets support for multi-document ACID transactions

MongoDB is finally getting support for multi-document ACID (atomicity, consistency, isolation, durability) transactions. That’s something the MongoDB community has been asking for for years, and MongoDB Inc., the company behind the project, is now about to make it a reality. As the company will announce at an event later today, support for ACID transactions will launch when it ships… Read More
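
Based on the announcement, a multi-document transaction should look roughly like this in PyMongo once the feature ships; the database, collection and field names are illustrative, and the final API may differ at launch:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # assumed deployment
accounts = client.bank.accounts

with client.start_session() as session:
    with session.start_transaction():
        # Both updates commit atomically, or neither does.
        accounts.update_one({"_id": "alice"}, {"$inc": {"balance": -100}},
                            session=session)
        accounts.update_one({"_id": "bob"}, {"$inc": {"balance": 100}},
                            session=session)
```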

Feb
12
2018
--

Oracle to expand automation capabilities across developer cloud services

Last fall at Oracle OpenWorld, chairman Larry Ellison showed he was a man of the people by comparing the company’s new autonomous database service to auto-pilot on his private plane. Regardless, those autonomous capabilities were pretty advanced, providing customers with a self-provisioning, self-tuning and self-repairing database. Today, Oracle announced it was expanding that… Read More

Nov
14
2017
--

Google Cloud Spanner update includes SLA that promises less than five minutes of downtime per year

Cloud Spanner, Google’s globally distributed cloud database, got an update today that includes multi-region support, meaning the database can be replicated across regions for lower latency and better performance. It also got an updated Service Level Agreement (SLA) that should please customers. The latter states Cloud Spanner databases will have 99.999% (five nines) availability, a level… Read More
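
For context: five nines works out to 525,600 minutes a year × 0.001 percent ≈ 5.3 minutes, so the SLA leaves room for only about five minutes of downtime in an entire year.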

Oct
03
2017
--

Webinar October 4, 2017: Databases in the Hosted Cloud

Join Percona’s Chief Evangelist, Colin Charles, as he presents Databases in the Hosted Cloud on Wednesday, October 4, 2017, at 7:00 am PDT / 10:00 am EDT (UTC-7).


Today you can use hosted MySQL/MariaDB/Percona Server for MySQL/PostgreSQL from several cloud providers as a database as a service (DBaaS). Learn the differences, the access methods and the level of control you have across the various hosted cloud database offerings:

  • Amazon RDS including Aurora
  • Google Cloud SQL
  • Rackspace OpenStack DBaaS
  • Oracle Cloud’s MySQL Service

The administration tools and ideologies behind each are completely different, and you are in a “locked-down” environment. Some considerations include:

  • Different backup strategies
  • Planning for multiple data centers for availability
  • Where do you host your application?
  • How do you get the most performance out of the solution?
  • What does this all cost?
  • Monitoring

Growth topics include:

  • How do you move from one DBaaS to another?
  • How do you move from a DBaaS to your own hosted platform?

Register for the webinar here.

Colin Charles, Chief Evangelist

Colin Charles is the Chief Evangelist at Percona. He was previously on the founding team of MariaDB Server in 2009, worked at MySQL starting in 2005 and has been a MySQL user since 2000. Before joining MySQL, he worked actively on the Fedora and OpenOffice.org projects. He’s well known within many open source communities and has spoken on the conference circuit.

