Aug 22, 2019

Oracle directors give blessing to shareholder lawsuit against Larry Ellison and Safra Catz

Three years after closing a $9.3 billion deal to acquire NetSuite, several Oracle board members have written an extraordinary letter to the Delaware Court, approving a shareholder lawsuit against company executives Larry Ellison and Safra Catz over the 2016 deal. Reuters broke this story.

According to Reuters’ Alison Frankel, three board members, including former U.S. Defense Secretary Leon Panetta, sent a letter on August 15th to Sam Glasscock III, vice chancellor of the Delaware Court of Chancery in Georgetown, Delaware, approving the suit as members of a special board of directors entity known as the Special Litigation Committee.

The lawsuit is what is known in legal parlance as a derivative suit. “Since shareholders are generally allowed to file a lawsuit in the event that a corporation has refused to file one on its own behalf, many derivative suits are brought against a particular officer or director of the corporation for breach of contract or breach of fiduciary duty,” the legal information site Justia explains.

The letter goes on to say there was an attempt to settle the suit, which was originally filed in 2017, through negotiation outside of court, but when that attempt failed, the directors concluded the suit should be allowed to proceed.

As Frankel wrote in her article, the lawsuit, which was originally filed by the Firemen’s Retirement System of St. Louis, could be worth billions:

One of the lead lawyers for the Firemen’s fund, Joel Friedlander of Friedlander & Gorris, said at a hearing in June that shareholders believe the breach-of-duty claims against Oracle and NetSuite executives are worth billions of dollars. So in last week’s letter, Oracle’s board effectively unleashed plaintiffs’ lawyers to seek ten-figure damages against its own members.

It’s worth pointing out, as we reported at the time of the NetSuite acquisition, that Larry Ellison was involved in setting up NetSuite in the late 1990s and was a major shareholder at the time of the deal.

Oracle was struggling to find its cloud footing in 2016, and it was believed that by buying an established SaaS player like NetSuite, it could begin to build out its cloud business much faster than by trying to develop something similar internally. A June Synergy Research SaaS market share report, while acknowledging the market is fragmented, still showed Oracle far behind the pack in spite of that deal three years ago.

[Chart: Synergy Research Group, enterprise SaaS market share, Q1 2019]

While there have been bigger deals in tech M&A history, including Salesforce’s $15.7 billion acquisition of Tableau earlier this year, the NetSuite purchase still stands as one of the largest.

We reached out to Oracle regarding this story, but it declined to comment.

Aug 22, 2019

Enterprise software is hot — who would have thought?

Once considered the most boring of topics, enterprise software is now getting infused with such energy that it is arguably the hottest space in tech.

It’s been a long time coming. And it is the developers, software engineers and veteran technologists with deep experience building at-scale technologies who are energizing enterprise software. They have learned to build resilient and secure applications with open-source components through continuous delivery practices that align technical requirements with customer needs. And now they are developing the application architectures and tools for at-scale development and management that let enterprises make the same transformation.

“Enterprise had become a dirty word, but there’s a resurgence going on and Enterprise doesn’t just mean big and slow anymore,” said JD Trask, co-founder of Raygun enterprise monitoring software. “I view the modern enterprise as one that expects their software to be as good as consumer software. Fast. Easy to use. Delivers value.”

The shift to scale out computing and the rise of the container ecosystem, driven largely by startups, is disrupting the entire stack, notes Andrew Randall, vice president of business development at Kinvolk.

In advance of TechCrunch’s first enterprise-focused event, TC Sessions: Enterprise, The New Stack examined the commonalities between the numerous enterprise-focused companies who sponsor us. Their experiences help illustrate the forces at play behind the creation of the modern enterprise tech stack. In every case, the founders and CTOs recognize the need for speed and agility, with the ultimate goal of producing software that’s uniquely in line with customer needs.

We’ll explore these topics in more depth at The New Stack pancake breakfast and podcast recording at TC Sessions: Enterprise. Starting at 7:45 a.m. on Sept. 5, we’ll be serving breakfast and hosting a panel discussion on “The People and Technology You Need to Build a Modern Enterprise,” with Sid Sijbrandij, founder and CEO, GitLab, and Frederic Lardinois, enterprise writer and editor, TechCrunch, among others. Questions from the audience are encouraged and rewarded, with a raffle prize awarded at the end.

Traditional virtual machine infrastructure was originally designed to help manage server sprawl for systems-of-record software — not to scale out across a fabric of distributed nodes. The disruptors transforming the historical technology stack view the application, not the hardware, as the main focus of attention. Companies in The New Stack’s sponsor network provide examples of the shift toward software that they aim to inspire in their enterprise customers. Portworx provides persistent state for containers; NS1 offers a DNS platform that orchestrates the delivery of internet and enterprise applications; Lightbend combines the scalability and resilience of microservices architecture with the real-time value of streaming data.

“Application development and delivery have changed. Organizations across all industry verticals are looking to leverage new technologies, vendors and topologies in search of better performance, reliability and time to market,” said Kris Beevers, CEO of NS1. “For many, this means embracing the benefits of agile development in multicloud environments or building edge networks to drive maximum velocity.”

Enterprise software startups are delivering that value, while they embody the practices that help them deliver it.

The secrets to speed, agility and customer focus

Speed matters, but only if the end result aligns with customer needs. Faster time to market is often cited as the main driver behind digital transformation in the enterprise. But speed must also be matched by agility and the ability to adapt to customer needs. That means embracing continuous delivery, which Martin Fowler describes as the practice of keeping software in a state where it can be put into production at any time, with the workflows and the pipeline to support it.

Continuous delivery (CD) makes it possible to develop software that can adapt quickly, meet customer demands and provide a level of satisfaction with benefits that enhance the value of the business and the overall brand. CD has become a major category in cloud-native technologies, with companies such as CircleCI, CloudBees, Harness and Semaphore all finding their own ways to approach the problems enterprises face as they often struggle with the shift.

“The best-equipped enterprises are those [that] realize that the speed and quality of their software output are integral to their bottom line,” Rob Zuber, CTO of CircleCI, said.

Speed is also in large part why monitoring and observability have held their value and continue to be part of the larger dimension of at-scale application development, delivery and management. Better data collection and analysis, assisted by machine learning and artificial intelligence, allow companies to quickly troubleshoot and respond to customer needs with reduced downtime and tight DevOps feedback loops. Companies in our sponsor network that fit in this space include Raygun for error detection; Humio, which provides observability capabilities; InfluxData with its time-series data platform for monitoring; Epsagon, the monitoring platform for serverless architectures; and Tricentis for software testing.

“Customer focus has always been a priority, but the ability to deliver an exceptional experience will now make or break a ‘modern enterprise,’” said Wolfgang Platz, founder of Tricentis, which makes automated software testing tools. “It’s absolutely essential that you’re highly responsive to the user base, constantly engaging with them to add greater value. This close and constant collaboration has always been central to longevity, but now it’s a matter of survival.”

DevOps is a bit overplayed, but it still is the mainstay workflow for cloud-native technologies and critical to achieving engineering speed and agility in a decoupled, cloud-native architecture. However, DevOps is also undergoing its own transformation, buoyed by the increasing automation and transparency allowed through the rise of declarative infrastructure, microservices and serverless technologies. This is cloud-native DevOps. Not a tool or a new methodology, but an evolution of the longstanding practices that further align developers and operations teams — but now also expanding to include security teams (DevSecOps), business teams (BizDevOps) and networking (NetDevOps).

“We are in this constant feedback loop with our customers where, while helping them in their digital transformation journey, we learn a lot and we apply these learnings for our own digital transformation journey,” Francois Dechery, chief strategy officer and co-founder of CloudBees, said. “It includes finding the right balance between developer freedom and risk management. It requires the creation of what we call a continuous everything culture.”

Leveraging open-source components is also core to achieving engineering speed. Open-source use allows engineering teams to focus on building code that creates or supports the core business value. Startups in this space include Tidelift and open-source security companies such as Capsule8. Organizations in our sponsor portfolio that play roles in the development of at-scale technologies include The Linux Foundation, the Cloud Native Computing Foundation and the Cloud Foundry Foundation.

“Modern enterprises … think critically about what they should be building themselves and what they should be sourcing from somewhere else,” said Chip Childers, CTO of the Cloud Foundry Foundation. “Talented engineers are one of the most valuable assets a company can apply to being competitive, and ensuring they have the freedom to focus on differentiation is super important.”

You need great engineering talent, and you need to give those engineers the ability to build secure and reliable systems at scale, along with the trust of direct access to the hardware, which is itself a differentiator.

Is the enterprise really ready?

The bleeding edge can bleed too much for the liking of enterprise customers, said James Ford, an analyst and consultant.

“It’s tempting to live by mantras like ‘wow the customer,’ ‘never do what customers want (instead build innovative solutions that solve their need),’ ‘reduce to the max,’ … and many more,” said Bernd Greifeneder, CTO and co-founder of Dynatrace. “But at the end of the day, the point is that technology is here to help with smart answers … so it’s important to marry technical expertise with enterprise customer need, and vice versa.”

How the enterprise adopts new ways of working will affect how startups ultimately fare. The container hype has cooled a bit and technologists have more solid viewpoints about how to build out architecture.

One notable trend to watch: the role of cloud providers’ open-source projects such as Firecracker. AWS Lambda is built on Firecracker, the open-source virtualization technology originally developed at Amazon Web Services. Firecracker offers a way to get the speed and density that come with containers alongside the hardware isolation and security capabilities that virtualization provides. Startups such as Weaveworks have developed a platform on Firecracker, and the OpenStack Foundation’s Kata Containers project can also use it.
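
For a sense of how that works in practice, here is a minimal sketch of configuring and booting a Firecracker microVM through its REST API, which is exposed over a Unix socket. The socket and image paths are placeholders, and the requests-unixsocket dependency is just one convenient way to talk to the socket.

```python
# Minimal sketch: configure and boot a Firecracker microVM via its API socket.
# Assumes a `firecracker --api-sock /tmp/firecracker.socket` process is running
# and that the kernel/rootfs paths below exist; adjust them to your environment.
import requests_unixsocket  # pip install requests-unixsocket

SOCK = "http+unix://%2Ftmp%2Ffirecracker.socket"  # URL-encoded socket path
session = requests_unixsocket.Session()

# Size the microVM: 1 vCPU and 128 MiB of RAM.
session.put(f"{SOCK}/machine-config",
            json={"vcpu_count": 1, "mem_size_mib": 128}).raise_for_status()

# Point at an uncompressed kernel image and minimal boot arguments.
session.put(f"{SOCK}/boot-source",
            json={"kernel_image_path": "/images/vmlinux.bin",
                  "boot_args": "console=ttyS0 reboot=k panic=1"}).raise_for_status()

# Attach a root filesystem as the boot drive.
session.put(f"{SOCK}/drives/rootfs",
            json={"drive_id": "rootfs",
                  "path_on_host": "/images/rootfs.ext4",
                  "is_root_device": True,
                  "is_read_only": False}).raise_for_status()

# Start the guest. The microVM boots in a fraction of a second, which is the
# speed-plus-isolation combination described above.
session.put(f"{SOCK}/actions",
            json={"action_type": "InstanceStart"}).raise_for_status()
```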

“Firecracker makes it easier for the enterprise to have secure code,” Ford said. It reduces the security attack surface. “With its minimal footprint, the user has control. It means less features that are misconfigured, which is a major security vulnerability.”

Enterprise startups are hot. How they succeed will determine whether they can remain distinctive in the face of ever-expanding cloud services and the at-scale startups that inevitably launch their own services. The answer may lie in the middle, with purpose-built architectures that use open-source components such as Firecracker to provide the capabilities of containers alongside the hardware isolation that comes with virtualization.

Hope to see you at TC Sessions: Enterprise. Get there early. We’ll be serving pancakes to start the day. As we like to say, “Come have a short stack with The New Stack!”

Aug 22, 2019

Remediant lands $15M Series A to disrupt privileged access security

Remediant, a startup that helps companies secure privileged access in a modern context, today announced a $15 million Series A led by Dell Technologies Capital and ForgePoint Capital.

Remediant’s co-founders, Paul Lanzi and Tim Keeler, worked in biotech for years and saw first-hand a problem with the way companies secured privileged access: it was granted carte blanche to certain individuals in the organization. They believed that if you could limit that access, you would make the space more secure and less vulnerable to hackers.

Lanzi says they started the company with two core concepts. “The first concept is the ability to assess or detect all of the places where privileged accounts exist and what systems they have access to. The second concept is to strip away all of the privileged access from all of those accounts and grant it back on a just-in-time basis,” Lanzi explained.

If you’re thinking that could get in the way of people who need access to do their jobs, the founders, as former IT admins, considered that. Remediant is based on a Zero Trust model where you have to prove you have the right to access the privileged area, but it provides a reasonable baseline amount of time for users who need it, within the confines of continuously enforced access.

“Continuous enforcement is part of what we do, so by default we grant you four hours of access when you need that access, and then after that four hours, even if you forget to come back and end your session, we will automatically revoke that access. In that way all of the systems that are protected by SecureONE (the company’s flagship product) are held in this Zero Trust state where no one has access to them on a day-to-day basis,” Lanzi said.
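
To make the pattern concrete, here is a minimal sketch of a just-in-time grant-and-revoke loop with a default four-hour window, as Lanzi describes. All names and the in-memory store are hypothetical; this illustrates the general just-in-time idea, not Remediant’s actual implementation.

```python
# Illustrative sketch of just-in-time privileged access with automatic expiry.
# Everything here is hypothetical; a real system would enforce grants against
# the directory and endpoints themselves rather than an in-memory dict.
import time
from dataclasses import dataclass

DEFAULT_WINDOW_SECS = 4 * 60 * 60  # four hours, the default described above

@dataclass
class Grant:
    user: str
    host: str
    expires_at: float

class JitAccessController:
    def __init__(self) -> None:
        self._grants: dict[tuple[str, str], Grant] = {}

    def request_access(self, user: str, host: str,
                       window: int = DEFAULT_WINDOW_SECS) -> Grant:
        """Grant privileged access for a bounded window; no standing access."""
        grant = Grant(user, host, expires_at=time.time() + window)
        self._grants[(user, host)] = grant
        return grant

    def is_allowed(self, user: str, host: str) -> bool:
        """Check whether a user currently holds an unexpired grant."""
        grant = self._grants.get((user, host))
        return grant is not None and time.time() < grant.expires_at

    def enforce(self) -> None:
        """Run continuously (e.g., on a schedule) to revoke expired grants,
        even when the user forgot to end the session."""
        now = time.time()
        for key, grant in list(self._grants.items()):
            if now >= grant.expires_at:
                del self._grants[key]
```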

Remediant SecureONE Dashboard (Screenshot: Remediant)

The company has been bootstrapped until now and has actually been profitable, something that’s unusual for a startup at this stage of development, but Lanzi says they decided to take an investment in order to shift gears and concentrate on growth and product expansion.

Deepak Jeevankumar, managing director at investor Dell Technologies Capital, says it’s not easy for security startups to rise above the noise, but he saw something in Remediant’s founders. “Tim and Paul came from the practitioner’s viewpoint. They knew the actual problems that people face in terms of privileged access. So they had a very strong empathy towards the customer’s problem because they lived through it,” Jeevankumar told TechCrunch.

He added that the privileged access market hasn’t really been updated in two decades. “It’s a market ripe for disruption. They are combining the just-in-time philosophy with the Zero Trust philosophy, and are bringing that to the crown jewel of administrative access,” he said.

The company’s tools are installed on the customer’s infrastructure, either on-prem or in the cloud. They don’t have a pure cloud product at the moment, but they have plans for a SaaS version down the road to help small and medium-sized businesses solve the privileged access problem.

Lanzi says they are also looking to expand the product line in other ways with this investment. “The basic philosophies that underpin our technology are broadly applicable. We want to start applying our technology in those other areas as well. So as we think toward a future that looks more like cloud and more like DevOps, we want to be able to add more of those features to our products,” he said.

Aug 22, 2019

NASA’s new HPE-built supercomputer will prepare for landing Artemis astronauts on the Moon

NASA and Hewlett Packard Enterprise (HPE) have teamed up to build a new supercomputer, which will serve NASA’s Ames Research Center in California and develop models and simulations of the landing process for Artemis Moon missions.

The new supercomputer is called “Aitken,” named after the American astronomer Robert Grant Aitken, and it can run simulations at up to 3.69 petaflops of theoretical peak performance. Aitken is custom-designed by HPE and NASA to work with the Ames modular data center, a project the center began in 2017 to massively reduce the amount of water and energy used in cooling its supercomputing hardware.

Aitken employs second-generation Intel Xeon processors and Mellanox InfiniBand high-speed networking, and it has 221 TB of memory on board. It’s the result of four years of collaboration between NASA and HPE, and it will model different methods of entry, descent and landing for Moon-bound Artemis spacecraft, running simulations to determine possible outcomes and help identify the best, safest approach.

This isn’t the only collaboration between HPE and NASA: The enterprise computer maker built for the agency a new kind of supercomputer able to withstand the rigors of space, and sent it up to the ISS in 2017 for preparatory testing ahead of potential use on longer missions, including Mars. The two partners then opened that supercomputer for use in third-party experiments last year.

HPE also announced earlier this year that it was buying supercomputer company Cray for $1.3 billion. Cray is another long-time partner of NASA’s supercomputing efforts, dating back to the space agency’s establishment of a dedicated computational modeling division and of its Central Computing Facility at Ames Research Center.

Aug 21, 2019

Splunk acquires cloud monitoring service SignalFx for $1.05B

Splunk, the publicly traded data processing and analytics company, today announced that it has acquired SignalFx for a total price of about $1.05 billion. Approximately 60% of this will be in cash and 40% in Splunk common stock. The companies expect the acquisition to close in the second half of 2020.

SignalFx, which emerged from stealth in 2015, provides real-time cloud monitoring solutions, predictive analytics and more. Upon close, Splunk argues, this acquisition will allow it to become a leader “in observability and APM for organizations at every stage of their cloud journey, from cloud-native apps to homegrown on-premises applications.”

Indeed, the acquisition will likely make Splunk a far stronger player in the cloud space as it expands its support for cloud-native applications and the modern infrastructures and architectures those rely on.

Ahead of the acquisition, SignalFx had raised a total of $178.5 million, according to Crunchbase, including a recent Series E round. Investors include General Catalyst, Tiger Global Management, Andreessen Horowitz and CRV. Its customers include the likes of AthenaHealth, Change.org, Kayak, NBCUniversal and Yelp.

“Data fuels the modern business, and the acquisition of SignalFx squarely puts Splunk in position as a leader in monitoring and observability at massive scale,” said Doug Merritt, president and CEO, Splunk, in today’s announcement. “SignalFx will support our continued commitment to giving customers one platform that can monitor the entire enterprise application lifecycle. We are also incredibly impressed by the SignalFx team and leadership, whose expertise and professionalism are a strong addition to the Splunk family.”

Aug 21, 2019

Box introduces Box Shield with increased security controls and threat protection

Box has always had to balance sharing content broadly with protecting it as it moves through the world, but the more you share, the more likely it is that something can go wrong, such as the misconfigured shared links that surfaced earlier this year. In an effort to make the system more secure, the company today announced Box Shield in beta, a set of tools to help employees sharing Box content better understand who they are sharing with, while helping the security team see when content is being misused.

Link sharing is a natural part of what companies do with Box, and as Chief Product and Chief Strategy Officer Jeetu Patel says, you don’t want to change the way people use Box. Instead, he says, it’s his job to make it easier to keep that sharing secure, and that is the goal of today’s announcement.

“We’ve introduced Box Shield, which embeds these content controls and protects the content in a way that doesn’t compromise user experience, while ensuring safety for the administrator and the company, so their intellectual property is protected,” Patel explained.

He says this involves two components. The first is about raising user awareness and helping people understand what they’re sharing. In fact, sometimes companies use Box as a content management backend to distribute files like documentation on the internet on purpose; they want them to be indexed by Google. Other times, however, sharing happens through misuse of the file-sharing component, and Box wants to fix that with this release by making it clear who users are sharing with and what that means.

They’ve updated the experience on the web and mobile products to make it much clearer through messaging and interface design what the sharing level they have chosen means. Of course, some users will ignore all these messages, so there is a second component to give administrators more control.

Box Shield access controls (Photo: Box)

This involves helping customers build guardrails into the product to prevent leakage of entire categories of documents that you would never want leaked, like internal business plans, salary lists or financial documents, and even to granularly protect particular files or folders. “The second thing we’re trying to do is make sure that Box itself has some built-in security guardrails and boundary conditions that can help people reduce the risk around employee negligence or inadvertent disclosures, and then make sure that you have some very precision-based, granular security controls that can be applied to classifications that you’ve set on content,” he explained.

In addition, the company wants to help customers detect when employees are abusing content, perhaps sharing sensitive data like customer lists with a personal account, and flag these for the security team. This involves flagging anomalous downloads, suspicious sessions or unusual locations inside Box.
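
As a generic illustration of that kind of anomaly flagging (and not Box’s actual detection logic), the sketch below compares a user’s download count for the day against their own recent baseline and surfaces a sharp deviation to the security team.

```python
# Generic illustration (not Box's implementation): flag a user's download count
# as anomalous when it deviates sharply from their own recent baseline.
from statistics import mean, stdev

def is_anomalous_download_count(history: list[int], today: int,
                                threshold: float = 3.0) -> bool:
    """Return True if today's downloads exceed the user's baseline by more
    than `threshold` standard deviations."""
    if len(history) < 7:           # not enough baseline data to judge
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > 2 * mu      # flat history: fall back to a simple ratio
    return (today - mu) / sigma > threshold

# Example: a user who normally downloads a handful of files suddenly pulls 400.
baseline = [3, 5, 2, 4, 6, 3, 5, 4]
print(is_anomalous_download_count(baseline, 400))  # True -> flag for review
```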

The tool also can work with existing security products already in place, so that whatever classification has been applied in Box travels with a file, and anomalies or misuse can be captured by the company’s security apparatus before the file leaves the company’s boundaries.

While Patel acknowledges there is no way to prevent user misuse or abuse in all cases, by implementing Box Shield, the company is attempting to provide customers with a set of tools to help them reduce the possibility of it going undetected. Box Shield is in private beta today and will be released in the fall.

Aug 20, 2019

IBM is moving OpenPower Foundation to The Linux Foundation

IBM makes the Power Series chips, and as part of that has open-sourced some of the underlying technologies to encourage wider use of these chips. The open-source pieces have been part of the OpenPower Foundation. Today, the company announced it was moving the foundation under The Linux Foundation, and while it was at it, announced it was open-sourcing several other important bits.

Ken King, general manager for OpenPower at IBM, says that at this point in his organization’s evolution, they wanted to move it under the auspices of The Linux Foundation. “We are taking the OpenPower Foundation, and we are putting it as an entity or project underneath The Linux Foundation with the mindset that we are now bringing more of an open governance approach and open governance principles to the foundation,” King told TechCrunch.

But IBM didn’t stop there. It also announced that it was open-sourcing some of the technical underpinnings of the Power Series chip to make it easier for developers and engineers to build on top of the technology. Perhaps most importantly, the company is open-sourcing the Power Instruction Set Architecture (ISA). These are “the definitions developers use for ensuring hardware and software work together on Power,” the company explained.

King sees open-sourcing this technology as an important step for a number of reasons around licensing and governance. “The first thing is that we are taking the ability to be able to implement what we’re licensing, the ISA instruction set architecture, for others to be able to implement on top of that instruction set royalty free with patent rights,” he explained.

The company is also putting this under an open governance workgroup at the OpenPower Foundation. This matters to open-source community members because it provides a layer of transparency that might otherwise be lacking. What that means in practice is that any changes will be subject to a majority vote, so long as the changes meet compatibility requirements, King said.

Jim Zemlin, executive director at the Linux Foundation, says that making all of this part of the Linux Foundation open-source community could drive more innovation. “Instead of a very, very long cycle of building an application and working separately with hardware and chip designers, because all of this is open, you’re able to quickly build your application, prototype it with hardware folks, and then work with a service provider or a company like IBM to take it to market. So there’s not tons of layers in between the actual innovation and value captured by industry in that cycle,” Zemlin explained.

In addition, IBM made several other announcements around open-sourcing other Power Chip technologies designed to help developers and engineers customize and control their implementations of Power chip technology. “IBM will also contribute multiple other technologies including a softcore implementation of the Power ISA, as well as reference designs for the architecture-agnostic Open Coherent Accelerator Processor Interface (OpenCAPI) and the Open Memory Interface (OMI). The OpenCAPI and OMI technologies help maximize memory bandwidth between processors and attached devices, critical to overcoming performance bottlenecks for emerging workloads like AI,” the company said in a statement.

The softcore implementation of the Power ISA, in particular, should give developers more control and even enable them to build their own instruction sets, Hugh Blemings, executive director of the OpenPower Foundation, explained. “They can now actually try crafting their own instruction sets, and try out new ways of the accelerated data processes and so forth at a lower level than previously possible,” he said.

The company is announcing all of this today at The Linux Foundation Open Source Summit and OpenPower Summit in San Diego.

Aug 20, 2019

Percona Server for MySQL 5.6.45-86.1 Now Available

Percona announces the release of Percona Server for MySQL 5.6.45-86.1 on August 20, 2019. Download the latest version from the Percona website or the Percona Software Repositories. You can also run Docker containers from the images in the Docker Hub repository.

Based on MySQL 5.6.45, and including all the bug fixes in it, Percona Server for MySQL 5.6.45-86.1 is the current GA release in the Percona Server for MySQL 5.6 series. Percona Server for MySQL is open-source and free – this is the latest release of our enhanced, drop-in replacement for MySQL.

Bugs Fixed:

  • The TokuDB hot backup library continually dumps TRACE information to the server error log. The user cannot enable or disable the dump of this information. Bug fixed #4850.
  • The TokuDBBackupPlugin is optional at cmake time. Bug fixed #5748.

Other bugs fixed: #5531, #5146, #5638, #5645, #5669, #5749, #5752, #5780, #5833, #5725, #5742, #5743, and #5746.

Release notes are available in the online documentation. Please report any bugs on the JIRA bug tracker.

Aug 20, 2019

Upcoming Webinar 8/22: Advanced Techniques to Profile and Visualize with Flame Graphs

Please join Percona Principal Support Engineer Marcos Albe as he presents his talk “Flame Graphs 201” on Thursday, August 22nd, 2019, at 11:00 AM PDT (UTC-7).

Register Now

Visualizing profiling information can be a very powerful tool for performance diagnostics, especially if we zoom right into the problem. Flame graphs were developed for this purpose, and we use them on a daily basis at Percona to solve complex performance issues. In this presentation, attendees will learn advanced techniques for performance profiling and for visualizing those profiles with flame graphs, as well as other assorted tricks that the flame graph scripts allow us to do.
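
For a taste of that workflow, here is a minimal sketch that samples a running process with Linux perf and renders the result with Brendan Gregg’s FlameGraph scripts. It assumes perf is installed and that stackcollapse-perf.pl and flamegraph.pl are on the PATH; the PID and sampling parameters are placeholders.

```python
# Minimal sketch: profile a running process with perf and render a flame graph.
# Assumes Linux `perf` is installed and Brendan Gregg's FlameGraph scripts
# (stackcollapse-perf.pl, flamegraph.pl) are on PATH; PID/rates are placeholders.
import subprocess

PID = "12345"  # e.g., the mysqld process ID

# Sample on-CPU stacks at 99 Hz for 30 seconds (writes perf.data in the cwd).
subprocess.run(["perf", "record", "-F", "99", "-g", "-p", PID, "--", "sleep", "30"],
               check=True)

# Convert perf's raw output into folded stacks, then into an interactive SVG.
script = subprocess.run(["perf", "script"], check=True, capture_output=True).stdout
folded = subprocess.run(["stackcollapse-perf.pl"], input=script,
                        check=True, capture_output=True).stdout
svg = subprocess.run(["flamegraph.pl"], input=folded,
                     check=True, capture_output=True).stdout

with open("cpu-flamegraph.svg", "wb") as f:
    f.write(svg)  # open in a browser and zoom into the widest (hottest) frames
```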

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Aug 20, 2019

H2O.ai announces $72.5M Series D led by Goldman Sachs

H2O.ai’s mission is to democratize AI by providing a set of tools that frees companies from relying on teams of data scientists. Today it got a bushel of money to help. The company announced a $72.5 million Series D round led by Goldman Sachs and Ping An Global Voyager Fund.

Previous investors Wells Fargo, Nvidia and Nexus Venture Partners also participated. Under the terms of the deal, Jade Mandel from Goldman Sachs will be joining the H2O.ai board. Today’s investment brings the total raised to $147 million.

It’s worth noting that Goldman Sachs isn’t just an investor. It’s also a customer. Company CEO and co-founder Sri Ambati says the fact that customers Wells Fargo and Goldman Sachs have led the last two rounds is a validation for him and his company. “Customers have risen up from the ranks for two consecutive rounds for us. Last time the Series C was led by Wells Fargo where we were their platform of choice. Today’s round was led by Goldman Sachs, which has been a strong customer for us and strong supporters of our technology,” Ambati told TechCrunch.

The company’s main product, H2O Driverless AI, introduced in 2017, gets its name from the fact that it provides a way for people who aren’t AI experts to take advantage of AI without a team of data scientists. “Driverless AI is automatic machine learning, which brings the power of a world-class data scientist into the hands of everyone. It builds models automatically using machine learning algorithms of every kind,” Ambati explained.
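
For a rough sense of what automatic machine learning looks like in code, here is a minimal sketch using H2O’s open-source AutoML Python API, which is related to but distinct from the commercial Driverless AI product; the dataset path and column names are placeholders.

```python
# Sketch: automatic machine learning with H2O's open-source AutoML Python API.
# Driverless AI is a separate commercial product; the dataset path and column
# names below are placeholders for illustration.
import h2o
from h2o.automl import H2OAutoML

h2o.init()

# Load a labeled dataset (placeholder path) and split off a test set.
frame = h2o.import_file("credit_risk.csv")
train, test = frame.split_frame(ratios=[0.8], seed=42)

target = "default"  # hypothetical binary label column
features = [c for c in train.columns if c != target]
train[target] = train[target].asfactor()  # treat the label as categorical
test[target] = test[target].asfactor()

# Let AutoML try many algorithms and ensembles within a time budget.
aml = H2OAutoML(max_runtime_secs=300, seed=42)
aml.train(x=features, y=target, training_frame=train)

print(aml.leaderboard.head())               # models ranked by cross-validated metric
print(aml.leader.model_performance(test))   # evaluate the best model on held-out data
```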

They introduced a new recipe concept today, which provides all of the AI ingredients and instructions for building models for different business requirements. H2O.ai’s team of data scientists has created and open-sourced 100 recipes for things like credit risk scoring, anomaly detection and property valuation.

The company has been growing since its Series C round in 2017, when it had 70 employees. Today it has 175 and has tripled the number of customers since the prior round, although Ambati didn’t discuss an exact number. The company has its roots in open source and has 20,000 users of its open-source products, according to Ambati.

He didn’t want to discuss valuation and wouldn’t say when the company might go public, saying it’s early days for AI and they are working hard to build a company for the long haul.
