Sep
07
2021
--

Seqera Labs grabs $5.5M to help sequence COVID-19 variants and other complex data problems

Bringing order and understanding to unstructured information located across disparate silos has been one of the more significant breakthroughs of the big data era, and today a European startup that has built a platform to help with this challenge specifically in the area of life sciences — and has, notably, been used by labs to sequence and so far identify two major COVID-19 variants — is announcing some funding to continue building out its tools to a wider set of use cases, and to expand into North America.

Seqera Labs, a Barcelona-based data orchestration and workflow platform tailored to help scientists and engineers order and gain insights from cloud-based genomic data troves, as well as to tackle other life science applications that involve harnessing complex data from multiple locations, has raised $5.5 million in seed funding.

Talis Capital and Speedinvest co-led this round, with participation also from previous backer BoxOne Ventures and a grant from the Chan Zuckerberg Initiative, Mark Zuckerberg and Dr. Priscilla Chan’s effort to back open source software projects for science applications.

Seqera — a portmanteau of “sequence” and “era”, the age of sequencing data, basically — had previously raised less than $1 million and, quietly, it is already generating revenue, with five of the world’s biggest pharmaceutical companies in its customer base, alongside biotech and other life sciences customers.

Seqera was spun out of the Centre for Genomic Regulation (CRG), a biomedical research center based in Barcelona, where it was built as the commercial application of Nextflow, open source workflow and data orchestration software originally created at the CRG by Seqera’s founders, Evan Floden and Paolo Di Tommaso.

Floden, Seqera’s CEO, told TechCrunch that he and Di Tommaso were motivated to create Seqera in 2018 after seeing Nextflow gain a lot of traction in the life science community, and subsequently getting a lot of repeat requests for further customization and features. Both Nextflow and Seqera have seen a lot of usage: the Nextflow runtime has been downloaded more than 2 million times, the company said, while Seqera’s commercial cloud offering has now processed more than 5 billion tasks.

The COVID-19 pandemic is a classic example of the acute challenge that Seqera (and by association Nextflow) aims to address in the scientific community. With COVID-19 outbreaks happening globally, each time a test for COVID-19 is processed in a lab, live genetic samples of the virus get collected. Taken together, these millions of tests represent a goldmine of information about the coronavirus and how it is mutating, and when and where it is doing so. For a new virus about which so little is understood and that is still persisting, that’s invaluable data.

So the problem is not whether the data exists for better insights (it does); it’s that it’s nearly impossible to view that data as a holistic body using legacy tools. It’s in too many places, there is just too much of it, and it’s growing and changing every day, which means that the traditional approach of porting data to a centralized location to run analytics on it just wouldn’t be efficient, and would cost a fortune to execute.

That is where Seqera comes in. The company’s technology treats each source of data across different clouds as a salient pipeline which can be merged and analyzed as a single body, without that data ever leaving the boundaries of the infrastructure where it already exists. Because the platform is customised to focus on genomic troves, scientists can then query that information for more insights. Seqera was central to the discovery of both the Alpha and Delta variants of the virus, and work is still ongoing as COVID-19 continues to hammer the globe.

Seqera is being used in other kinds of medical applications, such as in the realm of so-called “precision medicine.” This is emerging as a very big opportunity in complex fields like oncology: cancer mutates and behaves differently depending on many factors, including genetic differences of the patients themselves, which means that treatments are less effective if they are “one size fits all.”

Increasingly, we are seeing approaches that leverage machine learning and big data analytics to better understand individual cancers and how they develop for different populations, to subsequently create more personalized treatments, and Seqera comes into play as a way to sequence that kind of data.

This also highlights something else notable about the Seqera platform: it is used directly by the people who are analyzing the data — that is, the researchers and scientists themselves, without data specialists necessarily needing to get involved. This was a practical priority for the company, Floden told me, but nonetheless, it’s an interesting detail of how the platform is inadvertently part of that bigger trend of “no-code/low-code” software, designed to make highly technical processes usable by non-technical people.

It’s both the existing opportunity and the prospect of applying Seqera to other kinds of data that live in the cloud that make it an interesting company, and, it seems, an interesting investment, too.

“Advancements in machine learning, and the proliferation of volumes and types of data, are leading to increasingly more applications of computer science in life sciences and biology,” said Kirill Tasilov, principal at Talis Capital, in a statement. “While this is incredibly exciting from a humanity perspective, it’s also skyrocketing the cost of experiments to sometimes millions of dollars per project as they become computer-heavy and complex to run. Nextflow is already a ubiquitous solution in this space and Seqera is driving those capabilities at an enterprise level – and in doing so, is bringing the entire life sciences industry into the modern age. We’re thrilled to be a part of Seqera’s journey.”

“With the explosion of biological data from cheap, commercial DNA sequencing, there is a pressing need to analyse increasingly growing and complex quantities of data,” added Arnaud Bakker, principal at Speedinvest. “Seqera’s open and cloud-first framework provides an advanced tooling kit allowing organisations to scale complex deployments of data analysis and enable data-driven life sciences solutions.”

Although medicine and life sciences are perhaps Seqera’s most obvious and timely applications today, the framework originally designed for genetics and biology can be applied to any number of other areas: AI training, image analysis and astronomy are three early use cases, Floden said. Astronomy is perhaps very apt, since it seems that the sky is the limit.

“We think we are in the century of biology,” Floden said. “It’s the center of activity and it’s becoming data-centric, and we are here to build services around that.”

Seqera is not disclosing its valuation with this round.

Sep
08
2020
--

Google Cloud launches its Business Application Platform based on Apigee and AppSheet

Unlike some of its competitors, Google Cloud has recently started emphasizing how its large lineup of different services can be combined to solve common business problems. Instead of trying to sell individual services, Google is focusing on solutions and the latest effort here is what it calls its Business Application Platform, which combines the API management capabilities of Apigee with the no-code application development platform of AppSheet, which Google acquired earlier this year.

As part of this process, Google is also launching a number of new features for both services today. The company is launching the beta of a new API Gateway, built on top of the open-source Envoy project, for example. This is a fully managed service that is meant to make it easier for developers to secure and manage their APIs across Google’s cloud computing services and serverless offerings like Cloud Functions and Cloud Run. The new gateway, which has been in alpha for a while now, offers all the standard features you’d expect, including authentication, key validation and rate limiting.
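
To give a rough sense of what consuming a gateway-fronted service looks like from the client side, here is a minimal Python sketch. The hostname, path and API key are hypothetical placeholders, and whether the key is passed as a header or a query parameter depends on how the gateway’s API config defines its security settings, so treat this as an illustration rather than a definitive recipe.

    import requests

    # Placeholder values; a real deployment would use the hostname assigned
    # to the gateway and a key created in the Cloud Console.
    GATEWAY_URL = "https://my-gateway-abc123.uc.gateway.dev/v1/orders"
    API_KEY = "example-api-key"

    # API key validation is one of the gateway features mentioned above; a
    # common pattern is to send the key in an "x-api-key" header.
    response = requests.get(GATEWAY_URL, headers={"x-api-key": API_KEY}, timeout=10)
    response.raise_for_status()
    print(response.json())

The point of the managed gateway is that rate limiting and key checks happen before the request ever reaches the Cloud Function or Cloud Run service behind it.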

As for its low-code service AppSheet, the Google Cloud team is now making it easier to bring in data from third-party applications thanks to the general availability of Apigee as a data source for the service. AppSheet already supported standard sources like MySQL, Salesforce and G Suite, but this new feature adds a lot of flexibility to the service.

With more data comes more complexity, so AppSheet is also launching new tools for automating processes inside the service today, thanks to the early access launch of AppSheet Automation. Like the rest of AppSheet, the promise here is that developers won’t have to write any code. Instead, AppSheet Automation provides a visual interface that, according to Google, “provides contextual suggestions based on natural language inputs.”

“We are confident the new category of business application platforms will help empower both technical and line of business developers with the core ability to create and extend applications, build and automate workflows, and connect and modernize applications,” Google notes in today’s announcement. And indeed, this looks like a smart way to combine the no-code environment of AppSheet with the power of Apigee.

Jul
14
2020
--

Google Cloud’s new BigQuery Omni will let developers query data in GCP, AWS and Azure

At its virtual Cloud Next ’20 event, Google today announced a number of updates to its cloud portfolio, but the private alpha launch of BigQuery Omni is probably the highlight of this year’s event. Powered by Google Cloud’s Anthos hybrid-cloud platform, BigQuery Omni allows developers to use the BigQuery engine to analyze data that sits in multiple clouds, including those of Google Cloud competitors like AWS and Microsoft Azure — though for now, the service only supports AWS, with Azure support coming later.

Using a unified interface, developers can analyze this data locally without having to move data sets between platforms.
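
To make that concrete, here is a minimal sketch using the google-cloud-bigquery Python client. The project, dataset and table names are hypothetical, and querying data that lives in another cloud assumes a BigQuery Omni region and connection have already been set up for it; the point is simply that the client code and SQL look the same as for a regular BigQuery dataset.

    from google.cloud import bigquery

    # Hypothetical project and table; with BigQuery Omni the dataset would be
    # pinned to a region backed by the external cloud (e.g. an AWS region).
    client = bigquery.Client(project="my-analytics-project")

    sql = """
        SELECT order_date, SUM(amount) AS revenue
        FROM `my-analytics-project.aws_sales.orders`
        GROUP BY order_date
        ORDER BY order_date DESC
        LIMIT 10
    """

    # The query executes where the data lives; only the results come back.
    for row in client.query(sql).result():
        print(row.order_date, row.revenue)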

“Our customers store petabytes of information in BigQuery, with the knowledge that it is safe and that it’s protected,” said Debanjan Saha, the GM and VP of Engineering for Data Analytics at Google Cloud, in a press conference ahead of today’s announcement. “A lot of our customers do many different types of analytics in BigQuery. For example, they use the built-in machine learning capabilities to run real-time analytics and predictive analytics. […] A lot of our customers who are very excited about using BigQuery in GCP are also asking, ‘how can they extend the use of BigQuery to other clouds?’ ”

Google has long said that it believes that multi-cloud is the future — something that most of its competitors would probably agree with, though they all would obviously like you to use their tools, even if the data sits in other clouds or is generated off-platform. It’s the tools and services that help businesses make use of all of this data, after all, that let the different vendors differentiate themselves from each other. Maybe it’s no surprise then, given Google Cloud’s expertise in data analytics, that BigQuery is now joining the multi-cloud fray.

“With BigQuery Omni customers get what they wanted,” Saha said. “They wanted to analyze their data no matter where the data sits and they get it today with BigQuery Omni.”

He noted that Google Cloud believes that this will help enterprises break down their data silos and gain new insights into their data, all while allowing developers and analysts to use a standard SQL interface.

Today’s announcement is also a good example of how Google’s bet on Anthos is paying off by making it easier for the company to not just allow its customers to manage their multi-cloud deployments but also to extend the reach of its own products across clouds. This also explains why BigQuery Omni isn’t available for Azure yet, given that Anthos for Azure is still in preview, while AWS support became generally available in April.

Sep
11
2019
--

IBM brings Cloud Foundry and Red Hat OpenShift together

At the Cloud Foundry Summit in The Hague, IBM today showcased its Cloud Foundry Enterprise Environment on Red Hat’s OpenShift container platform.

For the longest time, the open-source Cloud Foundry Platform-as-a-Service ecosystem and Red Hat’s Kubernetes-centric OpenShift were mostly seen as competitors, with both tools vying for enterprise customers who want to modernize their application development and delivery platforms. But a lot of things have changed in recent times. On the technical side, Cloud Foundry started adopting Kubernetes as an option for application deployments and as a way of containerizing and running Cloud Foundry itself.

On the business side, IBM’s acquisition of Red Hat has brought along some change, too. IBM long backed Cloud Foundry as a top-level foundation member, while Red Hat bet on its own platform instead. Now that the acquisition has closed, it’s maybe no surprise that IBM is working on bringing Cloud Foundry to Red Hat’s platform.

For now, this work is still officially a technology experiment, but our understanding is that IBM plans to turn this into a fully supported project that will give Cloud Foundry users the option to deploy their applications right to OpenShift, while OpenShift customers will be able to offer their developers the Cloud Foundry experience.

“It’s another proof point that these things really work well together,” Cloud Foundry Foundation CTO Chip Childers told me ahead of today’s announcement. “That’s the developer experience that the CF community brings and in the case of IBM, that’s a great commercialization story for them.”

While Cloud Foundry isn’t seeing the same hype as in some of its earlier years, it remains one of the most widely used development platforms in large enterprises. According to the Cloud Foundry Foundation’s latest user survey, the companies that are already using it continue to move more of their development work onto the platform, and according to code analysis from source{d}, the project continues to see over 50,000 commits per month.

“As businesses navigate digital transformation and developers drive innovation across cloud native environments, one thing is very clear: they are turning to Cloud Foundry as a proven, agile, and flexible platform — not to mention fast — for building into the future,” said Abby Kearns, executive director at the Cloud Foundry Foundation. “The survey also underscores the anchor Cloud Foundry provides across the enterprise, enabling developers to build, support, and maximize emerging technologies.”

Also at this week’s Summit, Pivotal (which is in the process of being acquired by VMware) is launching the alpha version of the Pivotal Application Service (PAS) on Kubernetes, while Swisscom, an early Cloud Foundry backer, is launching a major update to its Cloud Foundry-based Application Cloud.

May
21
2019
--

Microsoft makes a push for service mesh interoperability

Service meshes. They are the hot new thing in the cloud native computing world. At KubeCon, the bi-annual festival of all things cloud native, Microsoft today announced that it is teaming up with a number of companies in this space to create a generic service mesh interface. This will make it easier for developers to adopt the concept without locking them into a specific technology.

In a world where the number of network endpoints continues to increase as developers launch new microservices, containers and other systems at a rapid clip, service meshes make the network smarter again by handling encryption, traffic management and other functions so that the actual applications don’t have to worry about them. With a number of competing service mesh technologies, though, including the likes of Istio and Linkerd, developers currently have to choose which one of these to support.

“I’m really thrilled to see that we were able to pull together a pretty broad consortium of folks from across the industry to help us drive some interoperability in the service mesh space,” Gabe Monroy, Microsoft’s lead product manager for containers and the former CTO of Deis, told me. “This is obviously hot technology — and for good reasons. The cloud-native ecosystem is driving the need for smarter networks and smarter pipes and service mesh technology provides answers.”

The partners here include Buoyant, HashiCorp, Solo.io, Red Hat, AspenMesh, Weaveworks, Docker, Rancher, Pivotal, Kinvolk and VMware. That’s a pretty broad coalition, though it notably doesn’t include cloud heavyweights like Google, the company behind Istio, and AWS.

“In a rapidly evolving ecosystem, having a set of common standards is critical to preserving the best possible end-user experience,” said Idit Levine, founder and CEO of Solo.io. “This was the vision behind SuperGloo — to create an abstraction layer for consistency across different meshes, which led us to the release of Service Mesh Hub last week. We are excited to see service mesh adoption evolve into an industry-level initiative with the SMI specification.”

For the time being, the interoperability features focus on traffic policy, telemetry and traffic management. Monroy argues that these are the most pressing problems right now. He also stressed that this common interface still allows the different service mesh tools to innovate and that developers can always work directly with their APIs when needed. He also stressed that the Service Mesh Interface (SMI), as this new specification is called, does not provide any of its own implementations of these features. It only defines a common set of APIs.
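
To illustrate what a common set of APIs means in practice, here is a rough sketch that creates an SMI-style TrafficSplit resource from Python with the official Kubernetes client. The group, version and field names follow the early SMI traffic-split spec as I understand it, and the service names and weights are hypothetical; whichever SMI-compatible mesh is installed is what actually enforces the split.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (for example, a dev cluster).
    config.load_kube_config()
    api = client.CustomObjectsApi()

    # A TrafficSplit describes how traffic for a root service should be divided
    # between backend versions; the mesh implementation enforces it.
    traffic_split = {
        "apiVersion": "split.smi-spec.io/v1alpha1",
        "kind": "TrafficSplit",
        "metadata": {"name": "checkout-rollout", "namespace": "default"},
        "spec": {
            "service": "checkout",  # the service clients actually call
            "backends": [
                {"service": "checkout-v1", "weight": "900m"},
                {"service": "checkout-v2", "weight": "100m"},
            ],
        },
    }

    api.create_namespaced_custom_object(
        group="split.smi-spec.io",
        version="v1alpha1",
        namespace="default",
        plural="trafficsplits",
        body=traffic_split,
    )

Because SMI only defines the resource types, the same manifest could in principle be applied whether the cluster runs Linkerd, Consul Connect or an Istio adapter.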

Currently, the most well-known service mesh is probably Istio, which Google, IBM and Lyft launched about two years ago. SMI may just bring a bit more competition to this market since it will allow developers to bet on the overall idea of a service mesh instead of a specific implementation.

In addition to SMI, Microsoft also today announced a couple of other updates around its cloud-native and Kubernetes services. It announced the first alpha of the Helm 3 package manager, for example, as well as the 1.0 release of its Kubernetes extension for Visual Studio Code and the general availability of its AKS virtual nodes, using the open source Virtual Kubelet project.

Apr
05
2019
--

Peter Kraus dishes on the market

During my recent conversation with Peter Kraus, which was supposed to be focused on Aperture and its launch of the Aperture New World Opportunities Fund, I couldn’t help veering off into tangents about the market in general. Below is Kraus’ take on the availability of alpha generation, the Fed, inflation versus Amazon, housing, the cross-ownership of U.S. equities by a few huge funds and high-frequency trading.

Gregg Schoenberg: Will alpha be more available over the next five years than it has been over the last five?

To think that at some point equities won’t become more volatile and decline 20% to 30%… I think it’s crazy.

Peter Kraus: Do I think it’s more available in the next five years than it was in the last five years? No. Do I think people will pay more attention to it? Yes, because when markets are up to 30 percent, if you get another five, it doesn’t matter. When markets are down 30 percent and I save you five by being 25 percent down, you care.

GS: Is the Fed’s next move up or down?

PK: I think the Fed does zero, nothing. In terms of its next interest rate move, in my judgment, there’s a higher probability that it’s down versus up.

Aug
29
2018
--

Storage provider Cloudian raises $94M

Cloudian, a company that specializes in helping businesses store petabytes of data, today announced that it has raised a $94 million Series E funding round. Investors in this round, which is one of the largest we have seen for a storage vendor, include Digital Alpha, Fidelity Eight Roads, Goldman Sachs, INCJ, JPIC (Japan Post Investment Corporation), NTT DOCOMO Ventures and WS Investments. This round includes a $25 million investment from Digital Alpha, which was first announced earlier this year.

With this, the seven-year-old company has now raised a total of $174 million.

As the company told me, it now has about 160 employees and 240 enterprise customers. Cloudian has found its sweet spot in managing the large video archives of entertainment companies, but its customers also include healthcare companies, automobile manufacturers and Formula One teams.

What’s important to stress here is that Cloudian’s focus is on on-premise storage, not cloud storage, though it does offer support for multi-cloud data management, as well. “Data tends to be most effectively used close to where it is created and close to where it’s being used,” Cloudian VP of worldwide sales Jon Ash told me. “That’s because of latency, because of network traffic. You can almost always get better performance, better control over your data if it is being stored close to where it’s being used.” He also noted that it’s often costly and complex to move that data elsewhere, especially when you’re talking about the large amounts of information that Cloudian’s customers need to manage.

Unsurprisingly, companies that have this much data now want to use it for machine learning, too, so Cloudian is starting to get into this space, as well. As Cloudian CEO and co-founder Michael Tso also told me, companies are now aware that the data they pull in, whether from IoT sensors, cameras or medical imaging devices, will only become more valuable over time as they try to train their models. If they decide to throw the data away, they run the risk of having nothing with which to train their models.

Cloudian plans to use the new funding to expand its global sales and marketing efforts and increase its engineering team. “We have to invest in engineering and our core technology, as well,” Tso noted. “We have to innovate in new areas like AI.”

As Ash also stressed, Cloudian’s business is really data management — not just storage. “Data is coming from everywhere and it’s going everywhere,” he said. “The old-school storage platforms that were siloed just don’t work anywhere.”

Apr
02
2013
--

Percona XtraBackup 2.1.0 for MySQL alpha now available

Percona is glad to announce the release of Percona XtraBackup 2.1.0 for MySQL alpha on April 2, 2013. Downloads are available from our download site here. For this ALPHA release, we will not be making APT and YUM repositories available, just base deb and RPM packages.

This is an ALPHA quality release and is not intended for production. If you want a high quality, generally available release, the current stable version should be used (currently 2.0.6 in the 2.0 series at the time of writing).

This release contains all of the features and bug fixes in Percona XtraBackup for MySQL 2.0.6, plus the following:

New Features

  • Percona XtraBackup for MySQL now has support for Compact Backups. This feature can be used to take backups that occupy less disk space.
  • Percona XtraBackup for MySQL has implemented Encrypted Backups. This feature can be used to encrypt and decrypt both local and streamed backups, adding another layer of protection to the backups; a hedged command-line sketch follows this list.
  • innobackupex now uses Perl's DBD::mysql package for server communication instead of spawning the MySQL command line client.
  • Support for InnoDB 5.0 and InnoDB 5.1-builtin has been removed from Percona XtraBackup for MySQL.
  • After being deprecated in the previous version, the --remote-host option has been completely removed in Percona XtraBackup for MySQL 2.1.
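
As a rough sketch of how the compact and encrypted backup features might be combined, the helper below wraps an innobackupex invocation from Python. The flag names (--compact, --encrypt, --encrypt-key-file) and the paths are assumptions based on the feature descriptions above, not a definitive invocation; check the 2.1 documentation before running anything like this against a real server.

    import subprocess

    def run_compact_encrypted_backup(backup_dir, key_file):
        # Assumed flags for the 2.1 alpha; verify against the release notes.
        cmd = [
            "innobackupex",
            "--compact",                        # smaller backups (see above)
            "--encrypt=AES256",                 # encrypt the backup files
            "--encrypt-key-file=" + key_file,   # keep key material off the command line
            backup_dir,
        ]
        subprocess.check_call(cmd)

    if __name__ == "__main__":
        run_compact_encrypted_backup("/data/backups", "/etc/xtrabackup/keyfile")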

Bugs Fixed:

  • innobackupex now supports empty arguments in the --password option. Bug fixed #1032667 (Andrew Gaul).

Release notes with all the bug fixes for Percona XtraBackup for MySQL 2.1.0-alpha1 are available in our online documentation. Bugs can be reported on the Launchpad bug tracker.

Feb
15
2013
--

Christine Rains: The 13th Floor novellas

Please give a warm welcome to Christine Rains, author of the paranormal series The 13th Floor (among others). I’ve read the first two in the series, The Marquis and The Alpha, and they’re super reads. Christine writes about very compelling, charismatic characters. The Dragonslayer is out now, and The Harbinger is coming March 13th.

Graeme: How and why did you become an author?

CR: I’ve always been a writer. Ever since I could print, I wrote stories. I write because I love it. It took a long time to gain the confidence to publish, though. Perhaps age has toughened up my skin.

Graeme: Which of your books is your favourite, and why?

CR: Oh, this is tough. I love my concept for the 13th Floor series. I’m currently writing it, so I’m in the “in love” stage with it. Yet for my favorite, I have to say my Magena Silver trilogy. Magena is a unique protagonist. She challenges me as a writer. I love the cast of characters in the trilogy.

Graeme: Which of the 13th Floor occupants is your favourite, and why?

CR: Harriet the banshee. Did I answer that too quickly? I hope the others aren’t jealous. Harriet is a compassionate soul that was cursed and has to live a double life. She’s in love with a vampire, and the only time they see each other is when she’s in her hideous old hag form. I love her internal conflict and how that along with her curse makes her interact with the world.

Graeme: Which authors/books inspire you the most?

CR: Stephen King has been a great inspiration for me since I was a teenager. His worldbuilding and characters are phenomenal. I also love George R.R. Martin, J.K. Rowling, and Karen Marie Moning.

Graeme: Tell us some random thing about you that your readers might not know.

CR: I have lots of freckles and I hate them.

Graeme: Describe your writing environment.

CR: I have a little room stuffed full of books and games. The walls are a calming moss green and there’s paintings by my son on the walls. The desk is old and simple, but it’s still holding up. I always have a thesaurus beside me, and if I’m lucky, a sweet snack.

Graeme: Are you a plotter or a “pantser”?

CR: Pantser. I’ve tried to outline and become more of a plotter. Organization would save me time and I need that, but I’ve never been able to stick to an outline. My stories take a life of their own and go where they want. Luckily for me, it has worked out most of the time!

Graeme: Who would you most like to meet and what would you say to/ask them?

CR: I always have trouble with this question. I’d like to meet a lot of people, but I’m very shy. One day, I’d like to take a car trip (for a book tour, if I’m lucky!) across the country from one coast to the other and meet all my online friends.

Graeme: What’s next once you’ve finished with the 13th Floor? Tease us…

CR: I’m not sure which project I’ll pick up next. I have a couple of paranormal romance manuscripts I want to polish and query later this year. Possibly one involving a sexy witch who’s an expert at brewing love potions or a reclusive psychic that has fallen in love with a ghost.

Graeme: If you could live anywhere in the world, where and why?

CR: A log cabin in the woods by the Rocky Mountains. A cozy little home with not much upkeep. That way I’ll have time to write and travel.

Graeme: Awesome! Thanks for answering those, and I can’t wait to see what you come up with next. Folks, check out Christine’s 13th Floor books below. Well worth a read.


Christine Rains

Christine Rains is a writer, blogger, and geek mom. She has four degrees which help nothing with motherhood but make her a great Jeopardy player. When she’s not reading or writing, she’s going on adventures with her son or watching cheesy movies on Syfy Channel. Christine is a member of Untethered Realms and S.C.I.F.I. She has twenty short stories and five novellas published.

Contact Christine via her Website, Facebook, Goodreads or Twitter as @CRainsWriter


On the rooftop of a neighboring building, dragonslayer Xanthus Ehrensvard fires at his target, Governor Whittaker. How he missed the shot, he doesn’t know, but fleeing the scene, he picks up an unwanted passenger. Gorgeous reporter Lois King saw Xan’s face, and she believes it’s the story to make her career. Except he can’t let her walk away knowing what he looks like. Xan has to show her the Governor is a bigger threat to the world than he is.

Xan knows dragons never went extinct. They evolved with human society, taking on mortal forms, and slithered their way into positions of great influence and power, just like the Governor. But it’s no easy chore proving to someone that dragons still exist, and even more so, they’re disguised as famous people. Xan must convince Lois or find another way to silence her. An option, as he gets to know her, he likes less and less.

After all, dragonslayers are no longer celebrated heroes but outlaws. Just as the dragons wish it. But this outlaw must make a plan to slay the dragon or risk its retribution.

Amazon | B&N | Kobo | Smashwords | Goodreads
