Apr 16, 2021
--

Data scientists: Bring the narrative to the forefront

By 2025, 463 exabytes of data will be created each day, according to some estimates. (For perspective, one exabyte of storage could hold 50,000 years of DVD-quality video.) It’s now easier than ever to translate physical and digital actions into data, and businesses of all types have raced to amass as much data as possible in order to gain a competitive edge.
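
As a quick sanity check on that comparison, assuming DVD-quality video runs at roughly 2 GB per hour (a ballpark figure, not one from the article), one exabyte does work out to tens of thousands of years of footage:

```python
# Back-of-envelope check of the "one exabyte ≈ 50,000 years of DVD video" claim.
# The ~2 GB/hour DVD bitrate is an assumption used only for illustration.
EXABYTE_GB = 1e9                  # 1 exabyte = 10^18 bytes = 10^9 gigabytes
GB_PER_HOUR = 2.0                 # assumed DVD-quality video bitrate
HOURS_PER_YEAR = 24 * 365.25

years_of_video = EXABYTE_GB / GB_PER_HOUR / HOURS_PER_YEAR
print(f"{years_of_video:,.0f} years")  # ~57,000 years, the same order as the claim
```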

However, in our collective infatuation with data (and obtaining more of it), what’s often overlooked is the role that storytelling plays in extracting real value from data.

The reality is that data by itself is insufficient to really influence human behavior. Whether the goal is to improve a business’ bottom line or convince people to stay home amid a pandemic, it’s the narrative that compels action, rather than the numbers alone. As more data is collected and analyzed, communication and storytelling will become even more integral in the data science discipline because of their role in separating the signal from the noise.

Data alone doesn’t spur innovation — rather, it’s data-driven storytelling that helps uncover hidden trends, powers personalization, and streamlines processes.

Yet this can be an area where data scientists struggle. In Anaconda’s 2020 State of Data Science survey of more than 2,300 data scientists, nearly a quarter of respondents said that their data science or machine learning (ML) teams lacked communication skills. This may be one reason why roughly 40% of respondents said they were able to effectively demonstrate business impact “only sometimes” or “almost never.”

The best data practitioners must be as skilled in storytelling as they are in coding and deploying models — and yes, this extends beyond creating visualizations to accompany reports. Here are some recommendations for how data scientists can situate their results within larger contextual narratives.

Make the abstract more tangible

Ever-growing datasets help machine learning models better understand the scope of a problem space, but more data does not necessarily help with human comprehension. Even for the most left-brain of thinkers, it’s not in our nature to understand large abstract numbers or things like marginal improvements in accuracy. This is why it’s important to include points of reference in your storytelling that make data tangible.

For example, throughout the pandemic, we’ve been bombarded with countless statistics around case counts, death rates, positivity rates, and more. While all of this data is important, tools like interactive maps and conversations around reproduction numbers are more effective than massive data dumps in terms of providing context, conveying risk, and, consequently, helping change behaviors as needed. In working with numbers, data practitioners have a responsibility to provide the necessary structure so that the data can be understood by the intended audience.
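
To make that concrete, here is a minimal sketch of the kind of reference point described above: turning a raw count into a “1 in N people” statement. The figures and function are purely illustrative:

```python
# A minimal sketch: express an abstract count as a tangible "1 in N" reference.
# The case and population numbers below are made up for illustration.
def one_in_n(count: int, population: int) -> str:
    return f"roughly 1 in {round(population / count):,} people"

print(f"12,500 cases in a city of 1 million is {one_in_n(12_500, 1_000_000)}.")
# -> 12,500 cases in a city of 1 million is roughly 1 in 80 people.
```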

Apr 15, 2021
--

Tecton teams with founder of Feast open source machine learning feature store

Tecton, the company that pioneered the notion of the machine learning feature store, has teamed up with the founder of the open source feature store project called Feast. Today the company announced the release of version 0.10 of the open source tool.

The feature store is a concept that the Tecton founders came up with when they were engineers at Uber. Shortly thereafter, an engineer named Willem Pienaar read the founders’ Uber blog posts on building a feature store and went to work building Feast as an open source version of the concept.

“The idea of Tecton [involved bringing] feature stores to the industry, so we build basically the best in class, enterprise feature store. […] Feast is something that Willem created, which I think was inspired by some of the early designs that we published at Uber. And he built Feast and it evolved as kind of like the standard for open source feature stores, and it’s now part of the Linux Foundation,” Tecton co-founder and CEO Mike Del Balso explained.

Tecton later hired Pienaar, who is today an engineer at the company, where he leads its open source team. While the company did not originally start off with a plan to build an open source product, the two products are closely aligned, and it made sense to bring Pienaar on board.

“The products are very similar in a lot of ways. So I think there’s a similarity there that makes this somewhat symbiotic, and there is no explicit convergence necessary. The Tecton product is a superset of what Feast has. So it’s an enterprise version with a lot more advanced functionality, but at Feast we have a battle-tested feature store that’s open source,” Pienaar said.

As we wrote in a December 2020 story on the company’s $35 million Series B, Tecton describes a feature store as “an end-to-end machine learning management system that includes the pipelines to transform the data into what are called feature values, then it stores and manages all of that feature data and finally it serves a consistent set of data.”
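
For a flavor of what serving “a consistent set of data” looks like in code, here is a minimal sketch using Feast’s Python API against a local feature repository. The feature view and entity names come from Feast’s demo project, not from Tecton’s product, and parameter names have shifted somewhat across Feast versions:

```python
from feast import FeatureStore

# Point at a local feature repository, created beforehand with
# `feast init` and `feast apply` (the local workflow introduced in 0.10).
store = FeatureStore(repo_path=".")

# Fetch a consistent set of feature values for one entity at prediction time.
# "driver_hourly_stats" is the feature view from Feast's demo project.
features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()
print(features)
```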

Del Balso says that from a business perspective, contributing to the open source feature store exposes his company to a different group of users, and the commercial and open source products can feed off one another as the company develops both.

“What we really like, and what we feel is very powerful here, is that we’re deeply in the Feast community and get to learn from all of the interesting use cases […] to improve the Tecton product. And similarly, we can use the feedback that we’re hearing from our enterprise customers to improve the open source project. That’s the kind of cross learning, and ideally that feedback loop involved there,” he said.

The plan is for Tecton to continue being a primary contributor with a team inside Tecton dedicated to working on Feast. Today, the company is releasing version 0.10 of the project.

Apr 15, 2021
--

Bigeye (formerly Toro) scores $17M Series A to automate data quality monitoring

As companies create machine learning models, the operations team needs to ensure that the data used for the model is of sufficient quality, a process that can be time-consuming. Bigeye (formerly Toro), an early-stage startup, is helping by automating data quality monitoring.

Today the company announced a $17 million Series A led by Sequoia Capital, with participation from existing investor Costanoa Ventures. That brings the total raised to $21 million, including the $4 million seed round the startup raised last May.

When we spoke to Bigeye CEO and co-founder Kyle Kirwan last May, he said the seed round was going to be focused on hiring a team — they are 11 now — and building more automation into the product, and he says they have achieved that goal.

“The product can now automatically tell users what data quality metrics they should collect from their data, so they can point us at a table in Snowflake or Amazon Redshift or whatever and we can analyze that table and recommend the metrics that they should collect from it to monitor the data quality — and we also automated the alerting,” Kirwan explained.

He says that the company is focusing on data operations issues when it comes to inputs to the model, such as a table that isn’t updating when it’s supposed to, is missing rows or contains duplicate entries. Bigeye can automate alerts for those kinds of issues and speed up the process of getting model data ready for training and production.
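
As an illustration of the kinds of checks being automated here (a hand-rolled sketch, not Bigeye’s actual implementation), freshness, completeness and uniqueness metrics for a pandas table might look like this:

```python
import pandas as pd

def basic_quality_metrics(df: pd.DataFrame, ts_col: str, key_col: str) -> dict:
    """Hand-rolled versions of the checks described above: freshness,
    missing rows/keys and duplicate entries. Assumes ts_col holds
    timezone-aware UTC timestamps."""
    now = pd.Timestamp.now(tz="UTC")
    return {
        "hours_since_last_update": (now - df[ts_col].max()).total_seconds() / 3600,
        "row_count": len(df),
        "null_keys": int(df[key_col].isna().sum()),
        "duplicate_keys": int(df[key_col].duplicated().sum()),
    }

# Usage sketch: alert when a table goes stale (threshold chosen per table;
# `orders_df` and `send_alert` are hypothetical).
# metrics = basic_quality_metrics(orders_df, "updated_at", "order_id")
# if metrics["hours_since_last_update"] > 24:
#     send_alert("orders table is stale")
```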

Bogomil Balkansky, the partner at Sequoia who is leading today’s investment, sees the company attacking an important part of the machine learning pipeline. “Having spearheaded the data quality team at Uber, Kyle and Egor have a clear vision to provide always-on insight into the quality of data to all businesses,” Balkansky said in a statement.

As the founding team begins building the company, Kirwan says that building a diverse team is a key goal for them and something they are keenly aware of.

“It’s easy to hire a lot of other people that fit a certain mold, and we want to be really careful that we’re doing the extra work to [understand that just because] it’s easy to source people within our network, we need to push and make sure that we’re hiring a team that has different backgrounds and different viewpoints and different types of people on it because that’s how we’re going to build the strongest team,” he said.

Bigeye offers on-premises and SaaS solutions, and while it’s working with paying customers like Instacart, Crux Informatics and Lambda School, the product won’t be generally available until later in the year.

Apr 6, 2021
--

Aporia raises $5M for its AI observability platform

Machine learning (ML) models are only as good as the data you feed them. That’s true during training, but also once a model is put in production. In the real world, the data itself can change as new events occur and even small changes to how databases and APIs report and store data could have implications on how the models react. Since ML models will simply give you wrong predictions and not throw an error, it’s imperative that businesses monitor their data pipelines for these systems.

That’s where tools like Aporia come in. The Tel Aviv-based company today announced that it has raised a $5 million seed round for its monitoring platform for ML models. The investors are Vertex Ventures and TLV Partners.

Aporia co-founder and CEO Liran Hason spent five years with the Israel Defense Forces and then worked on the data science team at Adallom, a security company that was acquired by Microsoft in 2015. After the sale, he joined venture firm Vertex Ventures before starting Aporia in late 2019. It was during his time at Adallom that he first encountered the problems Aporia is now trying to solve.

“I was responsible for the production architecture of the machine learning models,” he said of his time at the company. “So that’s actually where, for the first time, I got to experience the challenges of getting models to production and all the surprises that you get there.”

The idea behind Aporia, Hason explained, is to make it easier for enterprises to implement machine learning models and leverage the power of AI in a responsible manner.

“AI is a super powerful technology,” he said. “But unlike traditional software, it highly relies on the data. Another unique characteristic of AI, which is very interesting, is that when it fails, it fails silently. You get no exceptions, no errors. That becomes really, really tricky, especially when getting to production, because in training, the data scientists have full control of the data.”

But as Hason noted, a production system may depend on data from a third-party vendor and that vendor may one day change the data schema without telling anybody about it. At that point, a model — say for predicting whether a bank’s customer may default on a loan — can’t be trusted anymore, but it may take weeks or months before anybody notices.

Aporia constantly tracks the statistical behavior of the incoming data and when that drifts too far away from the training set, it will alert its users.
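
A stripped-down version of that idea compares incoming values for a single feature against the training distribution and flags a statistically significant shift. The two-sample Kolmogorov–Smirnov test and the threshold below are illustrative choices, not Aporia’s actual method:

```python
import numpy as np
from scipy.stats import ks_2samp

def has_drifted(train_values, prod_values, p_threshold: float = 0.01) -> bool:
    """Flag drift when a two-sample KS test says the production values are
    unlikely to have come from the training distribution."""
    _statistic, p_value = ks_2samp(train_values, prod_values)
    return p_value < p_threshold

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)  # feature values seen during training
prod = rng.normal(0.5, 1.0, 1_000)    # incoming data with a shifted mean
print(has_drifted(train, prod))       # True -> alert the user
```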

One thing that makes Aporia unique is that it gives its users an almost IFTTT- or Zapier-like graphical tool for setting up the logic of these monitors. It comes pre-configured with more than 50 combinations of monitors and provides full visibility into how they work behind the scenes. That, in turn, allows businesses to fine-tune the behavior of these monitors for their own specific business case and model.

Initially, the team thought it could build generic monitoring solutions. But it realized that this would not only be a very complex undertaking, but also that the data scientists who build the models know exactly how those models should work and what they need from a monitoring solution.

“Monitoring production workloads is a well-established software engineering practice, and it’s past time for machine learning to be monitored at the same level,” said Rona Segev, founding partner at TLV Partners. “Aporia’s team has strong production-engineering experience, which makes their solution stand out as simple, secure and robust.”

Mar 17, 2021
--

OctoML raises $28M Series B for its machine learning acceleration platform

OctoML, a Seattle-based startup that offers a machine learning acceleration platform built on top of the open-source Apache TVM compiler framework project, today announced that it has raised a $28 million Series B funding round led by Addition. Previous investors Madrona Venture Group and Amplify Partners also participated in this round, which brings the company’s total funding to $47 million. The company last raised in April 2020, when it announced its $15 million Series A round led by Amplify.

The promise of OctoML, which was founded by the team that also created TVM, is that developers can bring their models to its platform and the service will automatically optimize that model’s performance for any given cloud or edge device.
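
To make that workflow concrete, here is a rough sketch of what model optimization looks like with the underlying open-source Apache TVM stack; the hosted Octomizer automates and tunes this per device. The model file and input shape are hypothetical placeholders:

```python
import onnx
import tvm
from tvm import relay

# Load a model and convert it into TVM's Relay intermediate representation.
# "model.onnx" and the input shape are placeholders for illustration.
onnx_model = onnx.load("model.onnx")
mod, params = relay.frontend.from_onnx(onnx_model, shape={"input": (1, 3, 224, 224)})

# Compile with aggressive optimizations for a chosen target; swapping the
# target string (e.g. "cuda", or an ARM triple for an edge device) is how
# the same model gets re-optimized for different hardware.
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target="llvm", params=params)

lib.export_library("model_llvm.so")  # deployable, optimized artifact
```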

As Brazil-born OctoML co-founder and CEO Luis Ceze told me, since raising its Series A round, the company started onboarding some early adopters to its “Octomizer” SaaS platform.

“It’s still in early access, but we have close to 1,000 early access sign-ups on the waitlist,” Ceze said. “That was a pretty strong signal for us to end up taking this [funding]. The Series B was pre-emptive. We were planning on starting to raise money right about now. We had barely started spending our Series A money — we still had a lot of that left. But since we saw this growth and we had more paying customers than we anticipated, there were a lot of signals like, ‘hey, now we can accelerate the go-to-market machinery, build a customer success team and continue expanding the engineering team to build new features.’”

Ceze tells me that the team also saw strong growth signals in the overall community around the TVM project (with about 1,000 people attending its virtual conference last year). As for its customer base (and the companies on its waitlist), Ceze says it spans a wide range of verticals, from defense contractors to financial services and life science companies, automotive firms and startups in a variety of fields.

Recently, OctoML also launched support for the Apple M1 chip — and saw very good performance from that.

The company has also formed partnerships with industry heavyweights like Microsoft (which is also a customer), Qualcomm and AMD to build out the open-source components and optimize its service for an even wider range of models (and larger ones, too).

On the engineering side, Ceze tells me that the team is looking at not just optimizing and tuning models but also the training process. Training ML models can quickly become costly and any service that can speed up that process leads to direct savings for its users — which in turn makes OctoML an easier sell. The plan here, Ceze tells me, is to offer an end-to-end solution where people can optimize their ML training and the resulting models and then push their models out to their preferred platform. Right now, its users still have to take the artifact that the Octomizer creates and deploy that themselves, but deployment support is on OctoML’s roadmap.

“When we first met Luis and the OctoML team, we knew they were poised to transform the way ML teams deploy their machine learning models,” said Lee Fixel, founder of Addition. “They have the vision, the talent and the technology to drive ML transformation across every major enterprise. They launched Octomizer six months ago and it’s already becoming the go-to solution developers and data scientists use to maximize ML model performance. We look forward to supporting the company’s continued growth.”



Mar 17, 2021
--

New Relic expands its AIOps services

In recent years, the publicly traded observability service New Relic started adding more machine learning-based tools to its platform for AI-assisted incident response when things don’t quite go as planned. Today, it is expanding this feature set with the launch of a number of new capabilities for what it calls its “New Relic Applied Intelligence Service.”

This expansion includes an anomaly detection service that is available even to free users, the ability to group alerts from multiple tools when the models think a single issue is triggering all of them, and new ML-based root cause analysis to help eliminate some of the guesswork when problems occur. Also new (and in public beta) is New Relic’s ability to detect patterns and outliers in log data stored in the company’s data platform.

The main idea here, New Relic’s director of product marketing Michael Olson told me, is to make it easier for companies of all sizes to reap the benefits of AI-enhanced ops.

“It’s been about a year since we introduced our first set of AIops capabilities with New Relic Applied Intelligence to the market,” he said. “During that time, we’ve seen significant growth in adoption of AIops capabilities through New Relic. But one of the things that we’ve heard from organizations that have yet to foray into adopting AIops capabilities as part of their incident response practice is that they often find that things like steep learning curves and long implementation and training times — and sometimes lack of confidence, or knowledge of AI and machine learning — often stand in the way.”

The new platform should be able to detect emerging problems in real time — without the team having to pre-configure alerts. And when it does so, it’ll smartly group all of the alerts from New Relic and other tools together to cut down on the alert noise and let engineers focus on the incident.

“Instead of an alert storm when a problem occurs across multiple tools, engineers get one actionable issue with alerts automatically grouped based on things like time and frequency, based on the context that they can read in the alert messages. And then now with this launch, we’re also able to look at relationship data across your systems to intelligently group and correlate alerts,” Olson explained.
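
The time-and-frequency part of that correlation is easy to picture with a toy sketch: sort alerts by timestamp and merge any that arrive within a short window. New Relic’s grouping is far richer (message context, relationship data), so this shows only the skeleton of the idea:

```python
from datetime import timedelta

def group_alerts(alerts: list, window: timedelta = timedelta(minutes=5)) -> list:
    """Group alerts whose timestamps fall within `window` of the previous
    alert in the same group; each alert is a dict with a "timestamp" key."""
    groups = []
    for alert in sorted(alerts, key=lambda a: a["timestamp"]):
        if groups and alert["timestamp"] - groups[-1][-1]["timestamp"] <= window:
            groups[-1].append(alert)  # close in time: treat as the same issue
        else:
            groups.append([alert])    # large gap: start a new issue
    return groups
```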

Maybe the highlight for the ops teams that will use these new features, though, is New Relic’s ability to pinpoint the probable root cause of a problem. As Guy Fighel, the general manager of applied intelligence and vice president of product engineering at New Relic, told me, the idea here is not to replace humans but to augment teams.

“We provide a non-black-box experience for teams to craft the decisions and correlation and logic based on their own knowledge and infuse the system with their own knowledge,” Fighel noted. “So you can get very specific based on your environment and needs. And so because of that and because we see a lot of data coming from different tools — all going into New Relic One as the data platform — our probable root cause is very accurate. Having said that, it is still a probable root cause. So although we are opinionated about it, we will never tell you, ‘hey, go fix that, because we’re 100% sure that’s the case.’ You’re the human, you’re in control.”

The AI system also asks users for feedback, so that the model gets refined with every new incident, too.

Fighel tells me that New Relic’s tools rely on a variety of statistical analysis methods and machine learning models. Some of those are unique to individual users while others are used across the company’s user base. He also stressed that all of the engineers who worked on this project have a background in site reliability engineering — so they are intimately familiar with the problems in this space.

With today’s launch, New Relic is also adding a new integration with PagerDuty and other incident management tools so that the state of a given issue can be synchronized bi-directionally between them.

“We want to meet our customers where they are and really be data source agnostic and enable customers to pull in data from any source, where we can then enrich that data, reduce noise and ultimately help our customers solve problems faster,” said Olson.



Mar 16, 2021
--

A crypto company’s journey to Data 3.0

Data is a gold mine for a company.

If managed well, it provides the clarity and insights that lead to better decision-making at scale, and it serves as an important tool for holding everyone accountable.

However, most companies are stuck in Data 1.0, which means they are leveraging data as a manual and reactive service. Some have started moving to Data 2.0, which employs simple automation to improve team productivity. The complexity of crypto data has opened up new opportunities in data, namely to move to the new frontier of Data 3.0, where you can scale value creation through systematic intelligence and automation. This is our journey to Data 3.0.

Coinbase is neither a finance company nor a tech company — it’s a crypto company. This distinction has big implications for how we work with data. As a crypto company, we work with three major types of data (instead of the usual one or two types of data), each of which is complex and varied:

  1. Blockchain: decentralized and publicly available.
  2. Product: large and real-time.
  3. Financial: high-precision and subject to many financial/legal/compliance regulations.

Our focus has been on how we can scale value creation by making this varied data work together, eliminating data silos, solving issues before they start and creating opportunities for Coinbase that wouldn’t exist otherwise.

Having worked at tech companies like LinkedIn and eBay, and also those in the finance sector, including Capital One, I’ve observed firsthand the evolution from Data 1.0 to Data 3.0. In Data 1.0, data is seen as a reactive function providing ad-hoc manual services or firefighting in urgent situations.

Mar 16, 2021
--

Noogata raises $12M seed round for its no-code enterprise AI platform

Noogata, a startup that offers a no-code AI solution for enterprises, today announced that it has raised a $12 million seed round led by Team8, with participation from Skylake Capital. The company, which was founded in 2019 and counts Colgate and PepsiCo among its customers, currently focuses on e-commerce, retail and financial services, but it notes that it will use the new funding to power its product development and expand into new industries.

The company’s platform offers a collection of what are essentially pre-built AI building blocks that enterprises can then connect to third-party tools like their data warehouse, Salesforce, Stripe and other data sources. An e-commerce retailer could use this to optimize its pricing, for example, thanks to recommendations from the Noogata platform, while a brick-and-mortar retailer could use it to plan which assortment to allocate to a given location.

“We believe data teams are at the epicenter of digital transformation and that to drive impact, they need to be able to unlock the value of data. They need access to relevant, continuous and explainable insights and predictions that are reliable and up-to-date,” said Noogata co-founder and CEO Assaf Egozi. “Noogata unlocks the value of data by providing contextual, business-focused blocks that integrate seamlessly into enterprise data environments to generate actionable insights, predictions and recommendations. This empowers users to go far beyond traditional business intelligence by leveraging AI in their self-serve analytics as well as in their data solutions.”

We’ve obviously seen a plethora of startups in this space lately. The proliferation of data — and the advent of data warehousing — means that most businesses now have the fuel to create machine learning-based predictions. What’s often lacking, though, is the talent. There’s still a shortage of data scientists and developers who can build these models from scratch, so it’s no surprise that we’re seeing more startups that are creating no-code/low-code services in this space. The well-funded Abacus.ai, for example, targets about the same market as Noogata.

“Noogata is perfectly positioned to address the significant market need for a best-in-class, no-code data analytics platform to drive decision-making,” writes Team8 managing partner Yuval Shachar. “The innovative platform replaces the need for internal build, which is complex and costly, or the use of out-of-the-box vendor solutions which are limited. The company’s ability to unlock the value of data through AI is a game-changer. Add to that a stellar founding team, and there is no doubt in my mind that Noogata will be enormously successful.”



Mar 15, 2021
--

DeepSee.ai raises $22.6M Series A for its AI-centric process automation platform

DeepSee.ai, a startup that helps enterprises use AI to automate line-of-business problems, today announced that it has raised a $22.6 million Series A funding round led by ForgePoint Capital. Previous investors AllegisCyber Capital and Signal Peak Ventures also participated in this round, which brings the Salt Lake City-based company’s total funding to date to $30.7 million.

The company argues that it offers enterprises a different take on process automation. The industry buzzword these days is “robotic process automation,” but DeepSee.ai argues that what it does is different. It describes its system as “knowledge process automation” (KPA) and defines this as a system that “mines unstructured data, operationalizes AI-powered insights, and automates results into real-time action for the enterprise.” The company also argues that today’s bots focus on basic task automation that doesn’t offer the kind of deeper insights that sophisticated machine learning models can bring to the table, and it stresses that it doesn’t aim to replace knowledge workers but to help them leverage AI to turn the plethora of data that businesses now collect into actionable insights.

“Executives are telling me they need business outcomes and not science projects,” writes DeepSee.ai CEO Steve Shillingford. “And today, the burgeoning frustration with most AI-centric deployments in large-scale enterprises is they look great in theory but largely fail in production. We think that’s because right now the current ‘AI approach’ lacks a holistic business context relevance. It’s unthinking, rigid and without the contextual input of subject-matter experts on the ground. We founded DeepSee to bridge the gap between powerful technology and line-of-business, with adaptable solutions that empower our customers to operationalize AI-powered automation — delivering faster, better and cheaper results for our users.”

To help businesses get started with the platform, DeepSee.ai offers three core tools. There’s DeepSee Assembler, which ingests unstructured data and gets it ready for labeling, model review and analysis. Then, DeepSee Atlas can use this data to train AI models that can understand a company’s business processes and help subject-matter experts define templates, rules and logic for automating a company’s internal processes. The third tool, DeepSee Advisor, meanwhile focuses on using text analysis to help companies better understand and evaluate their business processes.

Currently, the company’s focus is on providing these tools for insurance companies, the public sector and capital markets. In the insurance space, use cases include fraud detection, claims prediction and processing, and using large amounts of unstructured data to identify patterns in agent audits, for example.

That’s a relatively limited number of industries for a startup to operate in, but the company says it will use its new funding to accelerate product development and expand to new verticals.

“Using KPA, line-of-business executives can bridge data science and enterprise outcomes, operationalize AI/ML-powered automation at scale, and use predictive insights in real time to grow revenue, reduce cost and mitigate risk,” said Sean Cunningham, managing director of ForgePoint Capital. “As a leading cybersecurity investor, ForgePoint sees the daily security challenges around insider threat, data visibility and compliance. This investment in DeepSee accelerates the ability to reduce risk with business automation and delivers much-needed AI transparency required by customers for implementation.”

Mar 2, 2021
--

Microsoft launches Azure Percept, its new hardware and software platform to bring AI to the edge

Microsoft today announced Azure Percept, its new hardware and software platform for bringing more of its Azure AI services to the edge. Percept combines Microsoft’s Azure cloud tools for managing devices and creating AI models with hardware from Microsoft’s device partners. The general idea here is to make it far easier for all kinds of businesses to build and implement AI for things like object detection, anomaly detections, shelf analytics and keyword spotting at the edge by providing them with an end-to-end solution that takes them from building AI models to deploying them on compatible hardware.

To kickstart this, Microsoft also today launches a hardware development kit with an intelligent camera for vision use cases (dubbed Azure Percept Vision). The kit features hardware-enabled AI modules for running models at the edge, but it can also be connected to the cloud. Users will also be able to trial their proofs-of-concept in the real world because the development kit conforms to the widely used 80/20 T-slot framing architecture.

In addition to Percept Vision, Microsoft is also launching Azure Percept Audio for audio-centric use cases.

Azure Percept devices, including Trust Platform Module, Azure Percept Vision and Azure Percept Audio

“We’ve started with the two most common AI workloads, vision and voice, sight and sound, and we’ve given out that blueprint so that manufacturers can take the basics of what we’ve started,” said Roanne Sones, the corporate vice president of Microsoft’s edge and platform group. “But they can envision it in any kind of responsible form factor to cover a pattern of the world.”

Percept customers will have access to Azure’s cognitive services and machine learning models, and Percept devices will automatically connect to Azure’s IoT Hub.

Microsoft says it is working with silicon and equipment manufacturers to build an ecosystem of “intelligent edge devices that are certified to run on the Azure Percept platform.” Over the course of the next few months, Microsoft plans to certify third-party devices for inclusion in this program, which will ideally allow its customers to take their proofs-of-concept and easily deploy them to any certified devices.

“Anybody who builds a prototype using one of our development kits, if they buy a certified device, they don’t have to do any additional work,” said Christa St. Pierre, a product manager in Microsoft’s Azure edge and platform group.

St. Pierre also noted that all of the components of the platform will have to conform to Microsoft’s responsible AI principles — and go through extensive security testing.



