Dec
01
2020
--

AWS announces high resource Lambda functions, container image support & millisecond billing

AWS announced some big updates to its Lambda serverless function service today. For starters, it can now deliver functions with up to 10 GB of memory and 6 vCPUs (virtual CPUs). This will allow developers building more compute-intensive functions to get the resources they need.

“Starting today, you can allocate up to 10 GB of memory to a Lambda function. This is more than a 3x increase compared to previous limits. Lambda allocates CPU and other resources linearly in proportion to the amount of memory configured. That means you can now have access to up to 6 vCPUs in each execution environment,” the company wrote in a blog post announcing the new capabilities.
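As a rough illustration of that linear allocation, here is a small sketch. The 10,240 MB to 6 vCPU endpoint comes from the announcement; the exact mapping AWS uses is not published here, so treat the linear model as an assumption:

```python
# Sketch of Lambda's proportional CPU allocation. The 10,240 MB -> 6 vCPU
# endpoint is from the announcement; the linear mapping itself is an
# illustrative assumption, not an official AWS formula.

MAX_MEMORY_MB = 10_240  # new 10 GB ceiling
MAX_VCPUS = 6

def approx_vcpus(memory_mb: int) -> float:
    """Estimate the vCPU share for a given memory setting (linear model)."""
    if not 128 <= memory_mb <= MAX_MEMORY_MB:
        raise ValueError("Lambda memory must be between 128 MB and 10,240 MB")
    return MAX_VCPUS * memory_mb / MAX_MEMORY_MB

print(approx_vcpus(10_240))           # 6.0
print(round(approx_vcpus(1_769), 2))  # roughly 1, near the ~1,769 MB = 1 vCPU point AWS documents
```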

Serverless computing doesn’t mean there are no servers. It means that developers no longer have to worry about the compute, storage and memory requirements because the cloud provider — in this case, AWS — takes care of it for them, freeing them up to just code the application instead of deploying resources.

Today’s announcement, combined with support for the AVX2 instruction set, means that developers can use this approach with more sophisticated technologies like machine learning, gaming and even high-performance computing.

One of the beauties of this approach is that in theory you can save money because you aren’t paying for resources you aren’t using. You are only paying each time the application requires a set of resources and no more. To make this an even bigger advantage, the company also announced a change to billing granularity: “Starting today, we are rounding up duration to the nearest millisecond with no minimum execution time,” it wrote in a blog post on the new pricing approach.
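To see what millisecond billing changes in practice, here is a back-of-the-envelope sketch. The per-GB-second price is the published Lambda rate at the time, but treat it as an illustrative assumption:

```python
import math

# Back-of-the-envelope look at the new billing granularity. Duration used to
# be rounded up to the next 100 ms; it is now billed to the nearest
# millisecond with no minimum. The price below is the published x86 rate at
# the time; treat it as an illustrative assumption.
PRICE_PER_GB_SECOND = 0.0000166667

def invocation_cost(duration_ms: float, memory_gb: float, granularity_ms: int) -> float:
    """Cost of one invocation, rounding duration up to the billing granularity."""
    billed_ms = math.ceil(duration_ms / granularity_ms) * granularity_ms
    return (billed_ms / 1000) * memory_gb * PRICE_PER_GB_SECOND

# A 30 ms function at 1 GB: old 100 ms rounding vs. new 1 ms rounding.
old = invocation_cost(30, 1.0, 100)
new = invocation_cost(30, 1.0, 1)
print(f"saving per invocation: {1 - new / old:.0%}")  # saving per invocation: 70%
```

The shorter and more frequent your invocations, the bigger the difference the rounding change makes.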

Finally the company also announced container image support for Lambda functions. “To help you with that, you can now package and deploy Lambda functions as container images of up to 10 GB in size. In this way, you can also easily build and deploy larger workloads that rely on sizable dependencies, such as machine learning or data intensive workloads,” the company wrote in a blog post announcing the new capability.

All of these announcements in combination mean that you can now use Lambda functions for more intensive operations than you could previously, and the new billing approach should lower your overall spending as you make that transition to the new capabilities.

Jul
01
2020
--

Vendia raises $5.1M for its multicloud serverless platform

When the inventor of AWS Lambda, Tim Wagner, and the former head of blockchain at AWS, Shruthi Rao, co-found a startup, it’s probably worth paying attention. Vendia, as the new venture is called, combines the best of serverless and blockchain to help build a truly multicloud serverless platform for better data and code sharing.

Today, the Vendia team announced that it has raised a $5.1 million seed funding round, led by Neotribe’s Swaroop ‘Kittu’ Kolluri. Correlation Ventures, WestWave Capital, HWVP, Firebolt Ventures, Floodgate and Future\Perfect Ventures also participated in this oversubscribed round.

Image Credits: Vendia

Seeing Wagner at the helm of a blockchain-centric startup isn’t exactly a surprise. After building Lambda at AWS, he spent some time as VP of engineering at Coinbase, a role he left about a year ago to build Vendia.

“One day, Coinbase approached me and said, ‘Hey, maybe we could do for the financial system what you’ve been doing over there for the cloud system,’” he told me. “And so I got interested in that. We had some conversations. I ended up going to Coinbase and spent a little over a year there as the VP of Engineering, helping them to set the stage for some of that platform work and tripling the size of the team.” He noted that Coinbase may be one of the few companies where distributed ledgers are actually mission-critical to their business, yet even Coinbase had a hard time scaling its Ethereum fleet, for example, and there was no cloud-based service available to help it do so.

Tim Wagner, Vendia co-founder and CEO. Image Credits: Vendia

“The thing that came to me as I was working there was why don’t we bring these two things together? Nobody’s thinking about how would you build a distributed ledger or blockchain as if it were a cloud service, with all the things that we’ve learned over the course of the last 10 years building out the public cloud and learning how to do it at scale,” he said.

Wagner then joined forces with Rao, who spent a lot of time in her role at AWS talking to blockchain customers. One thing she noticed was that while it makes a lot of sense to use blockchain to establish trust in a public setting, that’s really not an issue for enterprise.

“After the 500th customer, it started to make sense,” she said. “These customers had made quite a bit of investment in IoT and edge devices. They were gathering massive amounts of data. They also made investments on the other side, with AI and ML and analytics. And they said, ‘Well, there’s a lot of data and I want to push all of this data through these intelligent systems. I need a mechanism to get this data.’” But the majority of that data often comes from third-party services. At the same time, most blockchain proof of concepts weren’t moving into any real production usage because the process was often far too complex, especially for enterprises that wanted to connect their systems to those of their partners.

Shruthi Rao, Vendia co-founder and CBO. Image Credits: Vendia

“We are asking these partners to spin up Kubernetes clusters and install blockchain nodes. Why is that? That’s because for blockchain to bring trust into a system to ensure trust, you have to own your own data. And to own your own data, you need your own node. So we’re solving fundamentally the wrong problem,” she explained.

The first product Vendia is bringing to market is Vendia Share, a way for businesses to share data with partners (and across clouds) in real-time, all without giving up control over that data. As Wagner noted, businesses often want to share large data sets but they also want to ensure they can control who has access to that data. For those users, Vendia is essentially a virtual data lake with provenance tracking and tamper-proofing built in.

The company, which mostly raised this round after the coronavirus pandemic took hold in the U.S., is already working with a couple of design partners in multiple industries to test out its ideas, and plans to use the new funding to expand its engineering team to build out its tools.

“At Neotribe Ventures, we invest in breakthrough technologies that stretch the imagination and partner with companies that have category creation potential built upon a deep-tech platform,” said Neotribe founder and managing director Kolluri. “When we heard the Vendia story, it was a no-brainer for us. The size of the market for multiparty, multicloud data and code aggregation is enormous and only grows larger as companies capture every last bit of data. Vendia’s serverless-based technology offers benefits such as ease of experimentation, no operational heavy lifting and a pay-as-you-go pricing model, making it both very consumable and highly disruptive. Given both Tim and Shruthi’s backgrounds, we know we’ve found an ideal ‘Founder fit’ to solve this problem! We are very excited to be the lead investors and be a part of their journey.”

Apr
09
2019
--

Google Cloud Run brings serverless and containers together

Two of the biggest trends in applications development in recent years have been the rise of serverless and containerization. Today at Google Cloud Next, the company announced a new product called Cloud Run that is designed to bring the two together. At the same time, the company also announced Cloud Run for GKE, which is specifically designed to run on Google’s version of Kubernetes.

Oren Teich, director of product management for serverless, says these products came out of discussions with customers. As he points out, developers like the flexibility and agility they get using serverless architecture, but have been looking for more than just compute resources. They want to get access to the full stack, and to that end the company is announcing Cloud Run.

“Cloud Run is introducing a brand new product that takes Docker containers and instantly gives you a URL. This is completely unique in the industry. We’re taking care of everything from the top end of SSL provisioning and routing, all the way down to actually running the container for you. You pay only by the hundred milliseconds of what you need to use, and it’s end-to-end managed,” Teich explained.

As for the GKE tool, it provides the same kinds of benefits, except for developers running their containers on Google’s GKE version of Kubernetes. Keep in mind, developers could be using any version of Kubernetes their organizations happen to have chosen, so it’s not a given that they will be using Google’s flavor of Kubernetes.

“What this means is that a developer can take the exact same experience, the exact same code they’ve written — and they have G Cloud command line, the same UI and our console and they can just with one-click target the destination they want,” he said.

All of this is made possible through yet another open-source project the company introduced last year called Knative. “Cloud Run is based on Knative, an open API and runtime environment that lets you run your serverless workloads anywhere you choose — fully managed on Google Cloud Platform, on your GKE cluster or on your own self-managed Kubernetes cluster,” Teich and Eyal Manor, VP of engineering, wrote in a blog post introducing Cloud Run.

Serverless, as you probably know by now, is a bit of a misnomer. It’s not really taking away servers, but it is eliminating the need for developers to worry about them. Instead of loading their application on a particular virtual machine, the cloud provider, in this case, Google, provisions the exact level of resources required to run an operation. Once that’s done, these resources go away, so you only pay for what you use at any given moment.

Jan
09
2019
--

Amazon Aurora Serverless – The Sleeping Beauty

Amazon RDS Aurora Serverless activation times

One of the most exciting features Amazon Aurora Serverless brings to the table is its ability to go to sleep (pause) when idle. This is a fantastic feature for development and test environments. You get access to a powerful database to run tests quickly, but it goes easy on your wallet as you only pay for storage when the instance is paused.

You can configure Amazon RDS Aurora Serverless to go to sleep after a specified period of time. This can be set to anywhere between five minutes and 24 hours.

configure Amazon RDS Aurora Serverless sleep time
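The same setting is exposed through the RDS API. As a hedged sketch, the helper below builds the ScalingConfiguration payload (field names follow the RDS ModifyDBCluster API, as used for example by boto3’s rds.modify_db_cluster) and enforces the five-minute-to-24-hour window:

```python
# Hedged sketch: the ScalingConfiguration payload for the RDS ModifyDBCluster
# API (e.g. boto3's rds.modify_db_cluster). Field names follow the AWS API;
# the 300..86400 second window mirrors the "five minutes to 24 hours" range
# shown in the console.

def scaling_configuration(min_acu: int, max_acu: int, seconds_until_pause: int) -> dict:
    """Build an auto-pause scaling configuration, validating the pause window."""
    if not 300 <= seconds_until_pause <= 86_400:
        raise ValueError("SecondsUntilAutoPause must be 300 (5 min) to 86400 (24 h)")
    return {
        "MinCapacity": min_acu,
        "MaxCapacity": max_acu,
        "AutoPause": True,
        "SecondsUntilAutoPause": seconds_until_pause,
    }

# Pause after five idle minutes, scaling between 2 and 8 ACU:
print(scaling_configuration(2, 8, 300))
```

You would pass this dict as the ScalingConfiguration argument of modify_db_cluster, or of create_db_cluster with EngineMode set to serverless.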

For this feature to work, however, inactivity has to be complete. If you have so much as a single query or even maintain an idle open connection, Amazon Aurora Serverless will not be able to pause.

This means, for example, that pretty much any monitoring you may have enabled, including our own Percona Monitoring and Management (PMM), will prevent the instance from pausing. It would be great if Amazon RDS Aurora Serverless would allow us to specify user accounts to ignore, or additional service endpoints which should not prevent it from pausing, but currently you need to get by without such monitoring and diagnostic tools, or else enable them only for the duration of the test run.

If you’re using Amazon Aurora Serverless to back very low traffic applications, you might consider disabling the automatic pause function, since waking up currently takes quite a while. Otherwise, your users should be prepared for a 30+ second wait while Amazon Aurora Serverless activates.

Such a long activation time means you need to be mindful of timeout configuration in your test/dev scripts so you do not have to deal with sporadic failures. Alternatively, you can use something like the mysqladmin ping command to activate the instance before your test run.
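To avoid those sporadic failures, a test harness can wake the database explicitly before the run. The sketch below is a generic retry loop; `ping` here is a stand-in for whatever check you use (mysqladmin ping, or a SELECT 1 through your MySQL driver):

```python
import time

def wake_up(ping, timeout_s: float = 120.0, interval_s: float = 5.0) -> float:
    """Call ping() until it succeeds; return seconds waited, or raise on timeout."""
    start = time.monotonic()
    while True:
        try:
            ping()  # any callable that raises while the database is still paused
            return time.monotonic() - start
        except Exception:
            if time.monotonic() - start > timeout_s:
                raise TimeoutError("database did not resume within the timeout")
            time.sleep(interval_s)
```

Given the activation times measured below, a timeout well above one minute is a safer default than most drivers’ out-of-the-box connect timeouts.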

Some activation experiments

Let’s now take a closer look at Amazon RDS Aurora Serverless activation times. These times are measured for MySQL 5.6-based Aurora Serverless, the only one currently available. I expect numbers could be different in other editions.

Amazon RDS Aurora Serverless activation times

I measured the time it takes to run a trivial query (SELECT 1) after the instance goes to sleep. You’ll see I manually scaled the Amazon RDS Aurora Serverless instance to a desired capacity in ACU (Aurora Compute Units), and then had the script wait for six minutes to allow for pause to happen before running the query. The test was performed 12 times and the Min/Max/Avg times of these test runs for different settings of ACU are presented above.

You can see there is some variation between min and max times. I would expect to have even higher outliers, so plan for an activation time of more than a minute as a worst case scenario.

Also note that there is an interesting difference in the activation time between instance sizes. While in my tests the smallest possible size (2 ACU) consistently took longer to activate compared to the medium size (8 ACU), the even bigger size (64 ACU) was the slowest of all.

So make no assumptions about how long it would take for an instance of a given size to wake up with your workload; rather, test it if activation time is an important consideration for you.

In some (rare) cases I also observed some internal timeouts during the resume process:

[root@ip-172-31-16-160 serverless]# mysqladmin ping -h serverless-test.cluster-XXXX.us-east-2.rds.amazonaws.com -u user -ppassword
mysqladmin: connect to server at 'serverless-test.cluster-XXXX.us-east-2.rds.amazonaws.com' failed
error: 'Database was unable to resume within timeout period.'

What about Autoscaling?

Finally, you may wonder how Amazon Aurora Serverless pausing plays with Amazon Aurora Serverless Autoscaling.

In my tests, I observed that resume always restores the instance size to the same ACU as it was before it was paused. However, this is where pausing configuration matters a great deal. According to this document, Amazon Aurora Serverless will not scale down more frequently than once per 900 seconds. While the document does not clarify over what period of time the conditions initiating scale down (CPU usage, connection usage, etc.) have to be met for scale down to be triggered, I can see that if the instance is idle for five minutes, the scale down is not performed; it is just put to sleep.

At the same time, if you change this default five minute period to a longer time, the idle instance will be automatically scaled down a notch every 900 seconds before it finally goes to sleep. Consequently, when it is awakened it will not be at the last stage at which the load was applied, but instead at the stage it was at when it was scaled down. Also, scaling down is considered an event by itself, which resets the idle counter and delays the pause. For example: if the initial instance scale is 8 ACU and the pause timer is set to one hour, it takes 1 hour 30 minutes for the pause to actually happen: 30 minutes to scale down twice, plus one hour at the minimum size for the pause to trigger.
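The arithmetic above can be sketched as a small function (assuming each scale-down step halves capacity, steps are 900 seconds apart, and the pause timer only counts time spent at minimum capacity, which matches what I observed but is not documented behavior):

```python
def seconds_until_pause(start_acu: int, min_acu: int, pause_timeout_s: int) -> int:
    """Time from going idle to pause: scale-down steps plus the pause timer."""
    steps = 0
    acu = start_acu
    while acu > min_acu:
        acu //= 2   # assumed: each scale-down step halves capacity
        steps += 1
    return steps * 900 + pause_timeout_s  # 900 s between scale-down events

# 8 ACU with a 1 h pause timer: two scale-downs (8 -> 4 -> 2) plus the hour.
print(seconds_until_pause(8, 2, 3600))  # 5400 seconds = 1 h 30 min
```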

Here is a graph to illustrate this:

Amazon Aurora Serverless scale down timings

This also shows that when the load is re-applied at about 13:47, it recovers to the last number of ACU it had before the pause.

This means that a pause time of more than 15 minutes makes the pause behavior substantially different from the default.

Summary

  • Amazon Aurora Serverless automatic pause is great for test/dev environments.
  • Resume time is relatively long and can reach as much as one minute.
  • Consider disabling automatic pausing for low traffic production applications, or at least let your users know they need to wait when they wake up the application.
  • Pause and Resume behavior is different in practice for a pause timeout of more than 15 minutes. Sticking to the default 5 minutes is recommended unless you really know what you’re doing.
Dec
17
2018
--

Amazon RDS Aurora Serverless – The Basics

amazon aurora serverless

When I attended AWS Re:Invent 2018, I saw there was a lot of attention from both customers and the AWS team on Amazon RDS Aurora Serverless. So I decided to take a deeper look at this technology, and write a series of blog posts on this topic.

In this first post of the series, you will learn about Amazon Aurora Serverless basics and use cases. In later posts, I will share benchmark results and in-depth real-world results.

What Amazon Aurora Serverless Is

A great source of information on this topic is How Amazon Aurora Serverless Works from the official AWS documentation. In this article, you learn what serverless deployment, rather than provisioned deployment, means. Instead of specifying an instance size, you specify the minimum and maximum number of “Aurora Capacity Units” you would like to have:

choose MySQL version on Aurora

Amazon Aurora setup

capacity settings on Amazon Aurora

Once you set up such an instance, it will automatically scale between its minimum and maximum capacity points. You can also scale it manually if you like.

One of the most interesting Aurora Serverless properties, in my opinion, is its ability to go into pause if it stays idle for a specified period of time.

pause capacity on Amazon Aurora

This feature can save a lot of money for test/dev environments where load can be intermittent. Be careful, though, using this for production-size databases, as waking up is far from instant. I’ve seen cases of it taking over 30 seconds in my experiments.

Another thing that may surprise you about Amazon Aurora Serverless, at the time of this writing, is that it is not very well coordinated with other Amazon RDS Aurora products: it is only available as a MySQL 5.6-based edition, is not compatible with the recent parallel query innovations, and comes with a list of other significant limitations. I’m sure Amazon will resolve these in due course, but for now you need to be aware of them.

A simple way to think about it is as follows: Amazon Aurora Serverless is a way to deploy Amazon Aurora so it scales automatically with load; can automatically pause when there is no load; and resume automatically when requests come in.

What Amazon Aurora Serverless is not

When I think about serverless computing, I think about elastic scalability across multiple servers and resource-usage-based pricing. DynamoDB, another database that Amazon advertises as serverless, fits those criteria, while Amazon Aurora Serverless does not.

With Amazon Aurora Serverless, for better or for worse, you’re still living in the “classical” instance world. Aurora Capacity Units (ACUs) are pretty much CPU and memory capacity. You still need to understand how many database connections you are allowed to have. You still need to monitor your CPU usage on the instance to understand when auto scaling will happen.

Amazon Aurora Serverless also does not have any magic to scale you beyond single instance performance, which you can get with provisioned Amazon Aurora.

Summary

I’m excited about the new possibilities Amazon Aurora Serverless offers. As long as you do not expect magic and understand this is one of the newest products in the Amazon Aurora family, you surely should give it a try for applications that fit.

If you’re hungry for more information about Amazon Aurora Serverless and can’t wait for the next articles in this series, this article by Jeremy Daly contains a lot of great information.


Photo by Emily Hon on Unsplash

Dec
07
2018
--

Pivotal announces new serverless framework

Pivotal has always been about making open-source tools for enterprise developers, but surprisingly, up until now, the arsenal has lacked a serverless component. That changed today with the alpha launch of Pivotal Function Service.

“Pivotal Function Service is a Kubernetes-based, multi-cloud function service. It’s part of the broader Pivotal vision of offering you a single platform for all your workloads on any cloud,” the company wrote in a blog post announcing the new service.

What’s interesting about Pivotal’s flavor of serverless, besides the fact that it’s based on open source, is that it has been designed to work both on-prem and in the cloud in a cloud native fashion, hence the Kubernetes-based aspect of it. This is unusual to say the least.

The idea up until now has been that the large-scale cloud providers like Amazon, Google and Microsoft could dial up whatever infrastructure your functions require, then dial them down when you’re finished without you ever having to think about the underlying infrastructure. The cloud provider deals with whatever compute, storage and memory you need to run the function, and no more.

Pivotal wants to take that same idea and make it available in the cloud across any cloud service. It also wants to make it available on-prem, which may seem curious at first, but Pivotal’s Onsi Fakhouri says customers want the same capabilities both on-prem and in the cloud. “One of the key values that you often hear about serverless is that it will run down to zero and there is less utilization, but at the same time there are customers who want to explore and embrace the serverless programming paradigm on-prem,” Fakhouri said. Of course, then it is up to IT to ensure that there are sufficient resources to meet the demands of the serverless programs.

The new package includes several key components for developers: an environment for building, deploying and managing your functions; a native eventing capability that provides a way to build rich event triggers to call whatever functionality you require; and the ability to do all of this within a Kubernetes-based environment. This is particularly important as companies embrace hybrid use cases and need to manage events across on-prem and cloud in a seamless way.

One of the advantages of Pivotal’s approach is that Pivotal can work on any cloud as an open product. This is in contrast to the cloud providers like Amazon, Google and Microsoft, which provide similar services that run exclusively on their clouds. Pivotal is not the first to build an open-source Function as a Service, but they are attempting to package it in a way that makes it easier to use.

Serverless doesn’t actually mean there are no underlying servers. Instead, it means that developers don’t have to point to any servers because the cloud provider takes care of whatever infrastructure is required. In an on-prem scenario, IT has to make those resources available.

Jul
24
2018
--

Google’s Cloud Functions serverless platform is now generally available

Cloud Functions, Google’s serverless platform that competes directly with tools like AWS Lambda and Azure Functions from Microsoft, is now generally available, the company announced at its Cloud Next conference in San Francisco today.

Google first announced Cloud Functions back in 2016, so this has been a long beta. Overall, it also always seemed as if Google wasn’t quite putting the same resources behind its serverless play when compared to its major competitors. AWS, for example, is placing a major bet on serverless, as is Microsoft. And there are plenty of startups in this space, too.

Like all Google products that come out of beta, Cloud Functions is now backed by an SLA and the company also today announced that the service now runs in more regions in the U.S. and Europe.

In addition to these hosted options, Google also today announced its new Cloud Services platform for enterprises that want to run hybrid clouds. While this doesn’t include a self-hosted Cloud Functions option, Google is betting on Kubernetes as the foundation for businesses that want to run serverless applications (and yes, I hate the term ‘serverless,’ too) in their own data centers.

Jun
20
2018
--

Is Serverless Just a New Word for Cloud Based?

serverless architecture

Serverless is a new buzzword in the database industry. Even though it gets tossed around often, there is some confusion about what it really means and how it really works. Serverless architectures rely on third-party Backend as a Service (BaaS) services. They can also include custom code that is run in managed, ephemeral containers on a Functions as a Service (FaaS) platform. In comparison to traditional Platform as a Service (PaaS) server architecture, where you pay a predetermined sum for your instances, serverless applications benefit from reduced costs of operations and lower complexity. They are also considered to be more agile, allowing for reduced engineering efforts.

In reality, there are still servers in a serverless architecture: they are just being used, managed, and maintained outside of the application. But isn’t that a lot like what cloud providers, such as Amazon RDS, Google Cloud, and Microsoft Azure, are already offering? Well, yes, but with several caveats.

When you use any of the aforementioned platforms, you still need to provision the types of instances that you plan to use and define how those platforms will act. For example, will it run MySQL, MongoDB, PostgreSQL, or some other tool? With serverless, these decisions are no longer needed. Instead, you simply consume resources from a shared resource pool, using whatever application suits your needs at that time. In addition, in a serverless world, you are only charged for the time that you use the server instead of being charged whether you use it a lot or a little (or not at all).

Remember When You Joined That Gym?

How many of us have purchased a gym membership at some point in our life? Oftentimes, you walk in with the best of intentions and happily enroll in a monthly plan. “For only $29.95 per month, you can use all of the resources of the gym as much as you want.” But, many of us have purchased such a membership and found that our visits to the gym dwindle over time, leaving us paying the same monthly fee for less usage.

Traditional Database as a Service (DBaaS) offerings are similar to your gym membership: you sign up, select your service options, and start using them right away. There are certainly cases of companies using those services consistently, just like there are gym members who show up faithfully month after month. But there are also companies who spin up database instances for a specific purpose, use the database instance for some amount of time, and then slowly find that they are accessing that instance less and less. However, the fees for the instance, much like the fees for your gym membership, keep getting charged.

What if we had a “pay as you go” gym plan? Well, some of those certainly exist. Serverless architecture is somewhat like this plan: you only pay for the resources when you use them, and you only pay for your specific usage. This would be like charging $5 for access to the weight room and $3 for access to the swimming pool, each time you use one or the other. The one big difference with serverless architecture for databases is that you still need to have your data stored somewhere in the environment and made available to you as needed. This would be like renting a gym locker to store your workout gear so that you didn’t have to bring it back and forth each time you visited.

Obviously, you will pay for that storage, whether it is your data or your workout gear, but the storage fees are going to be less than your standard membership. The big advantage is that you have what you need when you need it, and you can access the necessary resources to use whatever you are storing.

With a serverless architecture, you store your data securely on low-cost storage devices and access it as needed. The resources required to process that data are available on an on-demand basis. So, your charges are likely to be lower since you are paying a low fee for data storage and a usage fee on resources. This can work great for companies that do not need 24x7x365 access to their data, since they are only paying for the services when they are using them. It’s also ideal for developers, who may find that they spend far more time working on their application code than testing it against the database. Instead of paying for the database resources while the data is just sitting there doing nothing, you now pay to store the data and incur the database-associated fees at use time.
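That trade-off is easy to put into numbers. The sketch below compares an always-on instance with pay-per-use pricing; all rates are made-up round figures for illustration, not any provider’s actual prices:

```python
# Hypothetical round numbers, for illustration only.
ALWAYS_ON_PER_HOUR = 0.10   # assumed provisioned-instance rate
USAGE_PER_HOUR = 0.25       # assumed serverless compute rate (higher per hour)
STORAGE_PER_MONTH = 2.00    # assumed storage fee, paid in both models

def monthly_cost_always_on(hours_in_month: int = 730) -> float:
    """Provisioned model: pay for every hour whether used or not."""
    return ALWAYS_ON_PER_HOUR * hours_in_month + STORAGE_PER_MONTH

def monthly_cost_serverless(active_hours: float) -> float:
    """Serverless model: pay only for active hours, plus storage."""
    return USAGE_PER_HOUR * active_hours + STORAGE_PER_MONTH

# A dev database that is busy about 40 hours a month:
print(round(monthly_cost_always_on(), 2))     # 75.0
print(round(monthly_cost_serverless(40), 2))  # 12.0
```

Note that the per-hour serverless rate is higher; the saving comes entirely from idle time, which is why the next section stresses knowing your usage pattern.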

Benefits and Risks of Going Serverless

One of the biggest possible benefits of going with a serverless architecture is that you save money and hassle. Money can be saved since you only pay for the resources when you use them. Hassle is reduced since you don’t need to worry about the hardware on which your application runs. These can be big wins for a company, but you need to be aware of some pitfalls.

First, serverless can save you money, but there is no guarantee that it will.

Consider two different people who have the exact same cell phone – maybe it’s your dad and your teenage daughter. These two users probably have very different patterns of usage: your dad uses the phone sporadically (if at all!) and your teenage daughter seems to have her phone physically attached to her. These two people would benefit from different service plans with their provider. For your dad, a basic plan that allows some usage (similar to the base cost of storage in our serverless database) with charges for usage above that cap would probably suffice. However, such a plan for your teenage daughter would probably spiral out of control and incur very high usage fees. For her, an unlimited plan makes sense. What is a great fit for one user is a poor fit for another, and the same is true when comparing serverless and DBaaS options.

The good news is that serverless architectures and DBaaS options, like Amazon RDS, Microsoft Azure, and Google Cloud, reduce a lot of the hassle of owning and managing servers. You no longer need to be concerned about Mean Time Between Failures, power and cooling issues, or many of the other headaches that come with maintaining your hardware. However, this can also have a negative consequence.

The challenge of enforced updates

About the only thing that is consistent about software in today’s world is that it is constantly changing. New versions are released with new features that may or may not be important to you. When a serverless provider decides to implement a new version or patch of their backend, there may be some downstream issues for you to manage. It is always important to test any new updates, but now some of the decisions about how and when to upgrade may be out of your control. Proper notification from the provider gives you a window of time for testing, but they are probably going to flip the switch regardless of whether or not you have completed all of your test cycles. This is true of both serverless and DBaaS options.

A risk of vendor lock-in

A common mantra in the software world is that we want to avoid vendor lock-in. Of course, from the provider’s side, they want to avoid customer churn, so we often find ourselves on opposite sides of the same issue. Moving to a new platform or provider becomes more complex as you cede more aspects of server management to the host. This means that serverless can cause deep lock-in since your application is designed to work with the environment as your provider has configured it. If you choose to move to a different provider, you need to extract your application and your data from the current provider and probably need to rework it to fit the requirements of the new provider.

The challenge of client-side optimization

Another consideration is that optimizations of server-side configurations must necessarily be more generic compared to those you might make to self-hosted servers. Optimization can no longer be done at the server level for your specific application and use; instead, you now rely on a smarter client to perform your necessary optimizations. This requires a skill set that may not exist with some developers: the ability to tune applications client-side.

Conclusion

Serverless is not going away. In fact, it is likely to grow as people come to a better understanding and comfort level with it. You need to be able to make an informed decision regarding whether serverless is right for you. Careful consideration of the pros and cons is imperative for making a solid determination. Understanding your usage patterns, user expectations, development capabilities, and a lot more will help to guide that decision.

In a future post, I’ll review the architectural differences between on-premises, PaaS, DBaaS and serverless database environments.

 

The post Is Serverless Just a New Word for Cloud Based? appeared first on Percona Database Performance Blog.

Jan
30
2018
--

Microsoft’s Azure Event Grid hits general availability

With Event Grid, Microsoft introduced a new Azure service last year that it hopes will become the glue that holds together modern event-driven and distributed applications. Starting today, Event Grid is generally available, with all the SLAs and other promises this entails.

Dec
05
2017
--

Pivotal has something for everyone in the latest Cloud Foundry Platform release

Pivotal wants to be the development platform that serves everyone, and today at their SpringOne Platform (S1P) developer conference in San Francisco, they announced a huge upgrade to their Pivotal Cloud Foundry platform (PCF) that includes support for serverless computing, containers and a new app store. As James Watters, senior VP of strategy, sees it, this is all part of a deliberate strategy…
