Dec 05, 2019

Figma launches Auto Layout

Figma, the design tool maker that has raised nearly $83 million from investors such as Index Ventures, Sequoia, Greylock and Kleiner Perkins, has today announced a new feature called Auto Layout that takes some of the tedious reformatting out of the design process.

Designers are all too familiar with the problem of manually sizing content in new components. For example, when a designer creates a new button for a web page, the text within the button has to be manually sized to fit. If the text or the button's size changes, everything has to be adjusted accordingly.

This problem is exacerbated when there are many instances of a certain component, all of which have to be manually adjusted.

Auto Layout functions as a toggle. When it’s on, Figma does all the adjusting for designers, making sure content is centered within components and that the components themselves adjust to fit any new content that might be added. When an item within a frame is re-sized or changed, the content around it dynamically adjusts along with it.

Auto Layout also allows users to change the orientation of a list of items from vertical to horizontal and back again, adjust the individual sizing of a component within a list or re-order components in a list with a single click.

It’s a little like designing on auto-pilot.

Auto Layout also functions within the component system, allowing designers to tweak the source of truth without detaching the symbol or content from it, meaning that these changes flow through to the rest of their designs.

Figma CEO Dylan Field said customer demand for this feature was very high, and that he hopes it will allow design teams to move much faster on user testing and iterative design.

Alongside the launch, Figma is also announcing that it has brought on its first independent board member. Lynn Vojvodich joins Danny Rimer, John Lilly, Mamoon Hamid and Andrew Reed on the Figma board.

Vojvodich has a wealth of experience as an operator in the tech industry, having served as EVP and CMO at Salesforce.com. She was a partner at Andreessen Horowitz and led her own company, Take3, for 10 years. Vojvodich also serves on the boards of several large corporations, including Ford Motor Company, Looker and Dell.

“I’ve never brought on an investor that I haven’t heavily reference checked, both with companies that have had success and those that haven’t,” said Field. “A good board can really help accelerate the company, but a challenging board can make it tough for companies to keep moving.”

Field added that, as conversations with Vojvodich progressed, she continually delivered value to the team with crisp answers and great insights, and noted that her experience translates well.

Dec 05, 2019

Design may be the next entrepreneurial gold rush

Ten years ago, the vast majority of designers were working in Adobe Photoshop, a powerful tool with fine-tuned controls for almost every kind of image manipulation one could imagine. But it was a tool built for an analog world focused on photos, flyers and print magazines; there were no collaborative features, and much more importantly for designers, there were no other options.

Since then, a handful of major players have stepped up to dominate the market alongside the behemoth, including InVision, Sketch, Figma and Canva.

And with the shift in the way designers fit into organizations and the way design fits into business overall, the design ecosystem is following the same path blazed by enterprise SaaS companies in recent years. Undoubtedly, investors are ready to place their bets in design.

But the question remains whether the design industry will follow in the footsteps of the sales stack — with Salesforce reigning as king and hundreds of much smaller startup subjects serving at its pleasure — or go the way of the marketing stack, where a lively ecosystem of smaller niche players exists under the umbrella of a handful of major, general-use players.

“Deca-billion-dollar SaaS categories aren’t born every day,” said InVision CEO Clark Valberg. “From my perspective, the majority of investors are still trying to understand the ontology of the space, while remaining sufficiently aware of its current and future economic impact so as to eagerly secure their foothold. The space is new and important enough to create gold-rush momentum, but evolving at a speed to produce the illusion of micro-categorization, which, in many cases, will ultimately fail to pass the test of time and avoid inevitable consolidation.”

I spoke to several notable players in the design space — Sketch CEO Pieter Omvlee, InVision CEO Clark Valberg, Figma CEO Dylan Field, Adobe Product Director Mark Webster, InVision VP and former VP of Design at Twitter Mike Davidson, Sequoia General Partner Andrew Reed and FirstMark Capital General Partner Amish Jani — and asked them what the fierce competition means for the future of the ecosystem.

But let’s first back up.

Past

Sketch launched in 2010, offering the first viable alternative to Photoshop. Built for UI and UX design rather than photo editing, Sketch arrived just as the app craze was picking up serious steam.

A year later, InVision landed in the mix. Rather than focus on the tools designers used, it concentrated on the evolution of design within organizations. With designers consolidating from many specialties to overarching positions like product and user experience designers, and with the screen becoming a primary point of contact between every company and its customers, InVision filled the gap of collaboration with its focus on prototypes.

If designs could look and feel like the real thing — without the resources spent by engineering — to allow executives, product leads and others to weigh in, the time it takes to bring a product to market could be cut significantly, and InVision capitalized on this new efficiency.

In 2012 came Canva, a product that focused primarily on non-designers and folks who need to ‘design’ without all the bells and whistles professionals use. The thesis: no matter which department you work in, you still need design, whether it’s for an internal meeting, an external sales deck, or simply a side project you’re working on in your personal time. Canva, like many tech firms these days, has taken its top-of-funnel approach to the enterprise, giving businesses an opportunity to unify non-designers within the org for their various decks and materials.

In 2016, the industry felt two more big shifts. In the first, Adobe woke up, realized it still had to compete and launched Adobe XD, which allowed designers to collaborate amongst themselves and within the organization, not unlike InVision, complete with prototyping capabilities. The second shift was the introduction of a little company called Figma.

Where Sketch innovated on price, focus and usability, and where InVision helped evolve design’s position within an organization, Figma changed the game with straight-up technology. If GitHub is Google Drive, Figma is Google Docs. Not only does Figma allow organizations to store and share design files, it actually allows multiple designers to work in the same file at the same time. Oh, and it’s all on the web.

In 2018, InVision started to move upstream with the launch of Studio, a design tool meant to take on the likes of Adobe and Sketch and, yes, Figma.

Present

When it comes to design tools in 2019, we have an embarrassment of riches, but the success of these players can’t be fully credited to the products themselves.

A shift in the way businesses think about digital presence has been underway since the early 2000s. In the not-too-distant past, not every company had a website and many that did offered a very basic site without much utility.

In short, designers were needed and valued at digital-first businesses and consumer-facing companies moving toward e-commerce, but very early-stage digital products, or incumbents in traditional industries had a free pass to focus on issues other than design. Remember the original MySpace? Here’s what Amazon looked like when it launched.

In the not-too-distant past, the aesthetic bar for internet design was very, very low. That’s no longer the case.

Dec 04, 2019

GitGuardian raises $12M to help developers write more secure code and ‘fix’ GitHub leaks

Data breaches that can cause millions of dollars in damages have become the bane of many a company. Guarding against them requires a great deal of real-time monitoring, and the problem is that this world has become incredibly complex. A SANS Institute survey found that half of company data breaches were the result of account or credential hacking.

GitGuardian has attempted to address this with a highly developer-centric cybersecurity solution.

It’s now attracted the attention of major investors, to the tune of $12 million in Series A funding, led by Balderton Capital. Scott Chacon, co-founder of GitHub, and Solomon Hykes, founder of Docker, also participated in the round.

The startup plans to use the investment from Balderton Capital to expand its customer base, predominantly in the U.S. Around 75% of its clients are currently based in the U.S., with the remainder in Europe, and the funding will help drive this expansion.

Built to uncover sensitive company information hiding in online repositories, GitGuardian says its real-time monitoring platform can address the data leak problem. Modern enterprise software developers have to integrate multiple internal and third-party services, which means they need incredibly sensitive “secrets,” such as login details, API keys and private cryptographic keys used to protect confidential systems and data.

GitGuardian’s systems detect thousands of credential leaks per day. The team originally built its launch platform with public GitHub in mind; however, GitGuardian is built as a private solution to monitor and notify on secrets that are inappropriately disseminated in internal systems as well, such as private code repositories or messaging systems.
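GitGuardian hasn't published its detection internals, but tools in this space typically combine provider-specific patterns with entropy checks to spot credentials in code. A minimal, purely illustrative sketch (the pattern names and the entropy threshold are assumptions, not GitGuardian's actual detectors):

```python
import math
import re

# Hypothetical detectors -- real scanners ship hundreds of provider-specific
# patterns; these two are illustrative only.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"(?i)(?:api[_-]?key|secret)['\"]?\s*[:=]\s*['\"]([A-Za-z0-9+/=_-]{20,})['\"]"
    ),
}

def shannon_entropy(s):
    """Bits of entropy per character; random tokens score high."""
    if not s:
        return 0.0
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def scan(text, entropy_threshold=3.5):
    """Return (detector, match) pairs for strings that look like secrets."""
    findings = []
    for name, pattern in PATTERNS.items():
        for m in pattern.finditer(text):
            candidate = m.group(1) if m.groups() else m.group(0)
            # The entropy filter cuts false positives on generic matches;
            # structured IDs like AKIA... are already specific enough.
            if name == "aws_access_key_id" or shannon_entropy(candidate) > entropy_threshold:
                findings.append((name, candidate))
    return findings
```

A production system layers far more on top of this (commit-history scanning, validity probing, alerting), but the pattern-plus-entropy core is the common starting point.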

Solomon Hykes, founder of Docker and investor at GitGuardian, said: “Securing your systems starts with securing your software development process. GitGuardian understands this, and they have built a pragmatic solution to an acute security problem. Their credentials monitoring system is a must-have for any serious organization.”

Do they have any competitors?

Co-founder Jérémy Thomas told me: “We currently don’t have any direct competitors. This generally means that there’s no market, or the market is too small to be interesting. In our case, our fundraise proves we’ve put our hands on something huge. So the reason we don’t have competitors is because the problem we’re solving is counterintuitive at first sight. Ask any developer, they will say they would never hardcode any secret in public source code. However, humans make mistakes and when that happens, they can be extremely serious: it can take a single leaked credential to jeopardize an entire organization. To conclude, I’d say our real competitors so far are black hat hackers. Black hat activity is real on GitHub. For two years, we’ve been monitoring organized groups of hackers that exchange sensitive information they find on the platform. We are competing with them on speed of detection and scope of vulnerabilities covered.”

Dec 03, 2019

AWS launches discounted spot capacity for its Fargate container platform

AWS today quietly brought spot capacity to Fargate, its serverless compute engine for containers that supports both the company’s Elastic Container Service and, now, its Elastic Kubernetes service.

Like spot instances on the EC2 compute platform, Fargate Spot pricing is significantly cheaper, for both storage and compute, than regular Fargate pricing. In return, though, you have to accept that your tasks may be terminated when AWS needs the capacity back. While that means Fargate Spot may not be right for every workload, there are plenty of applications that can easily handle an interruption.

“Fargate now has on-demand, savings plan, spot,” AWS VP of Compute Services Deepak Singh told me. “If you think about Fargate as a compute layer for, as we call it, serverless compute for containers, you now have the pricing worked out and you now have both orchestrators on top of it.”

He also noted that containers already drive a significant percentage of spot usage on AWS in general, so adding this functionality to Fargate makes a lot of sense (and may save users a few dollars here and there). Pricing, of course, is the major draw here, and an hour of CPU time on Fargate Spot will only cost $0.01245364 (yes, AWS is pretty precise there) compared to $0.04048 for the on-demand price.

With this, AWS is also launching another important new feature: capacity providers. The idea here is to automate capacity provisioning for Fargate and EC2, both of which now offer on-demand and spot instances, after all. You simply write a config file that, for example, says you want to run 70% of your capacity on EC2 and the rest on spot instances. The scheduler will then keep that capacity on spot as instances come and go, and if there are no spot instances available, it will move it to on-demand instances and back to spot once instances are available again.
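That config maps onto ECS's capacity-provider strategy, where relative weights split tasks between providers. A sketch of what such a payload could look like (cluster, service and task names are hypothetical; the 7/3 weights approximate the 70/30 split described above):

```python
# Illustrative ECS capacity-provider strategy: weights split tasks roughly
# 70/30 between on-demand Fargate and Fargate Spot, while `base` pins a
# minimum number of tasks to on-demand so a total loss of spot capacity
# can't take the service to zero.
capacity_provider_strategy = [
    {"capacityProvider": "FARGATE", "weight": 7, "base": 2},
    {"capacityProvider": "FARGATE_SPOT", "weight": 3},
]

# A real deployment would pass this to boto3; the call needs AWS
# credentials, so it is left commented out here:
# import boto3
# ecs = boto3.client("ecs")
# ecs.create_service(
#     cluster="my-cluster",
#     serviceName="my-service",
#     taskDefinition="my-task:1",
#     desiredCount=10,
#     capacityProviderStrategy=capacity_provider_strategy,
# )
```

The `base` field is the knob that implements the fallback behavior the article describes: those tasks always run on-demand, and only the remainder chases cheaper spot capacity.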

In the future, you will also be able to mix and match EC2 and Fargate. “You can say, I want some of my services running on EC2 on demand, some running on Fargate on demand, and the rest running on Fargate Spot,” Singh explained. “And the scheduler manages it for you. You squint hard, capacity is capacity. We can attach other capacity providers.” Outposts, AWS’ fully managed service for running AWS services in your data center, could be a capacity provider, for example.

These new features and prices will be officially announced in Thursday’s re:Invent keynote, but the documentation and pricing are already live today.

Dec 03, 2019

OrbitsEdge partners with HPE on orbital data center computing and analytics

What kinds of businesses might be able to operate in space? Well, data centers are one potential target you might not have thought of. Space provides an interesting environment for data center operations, including advanced analytics operations and even artificial intelligence, due in part to the excellent cooling conditions and reasonable access to renewable power supply (solar). But there are challenges, which is why a new partnership between Florida-based space startup OrbitsEdge and Hewlett Packard Enterprise (HPE) makes a lot of sense.

The partnership will make OrbitsEdge a hardware supplier for HPE’s Edgeline Converged Edge Systems, and basically it means that the space startup will be handling everything required to “harden” the standard HPE micro-data center equipment for use in outer space. Hardening is a standard process for getting stuff ready to use in space, and essentially prepares equipment to withstand the increased radiation, extreme temperatures and other stressors that space adds to the mix.

OrbitsEdge, founded earlier this year, has developed a proprietary piece of hardware called the “SatFrame” which is designed to counter the stress of a space-based operating environment, making it relatively easy to take off-the-shelf Earth equipment like the HPE Edgeline system and get it working in space without requiring a huge amount of additional, custom work.

Practically, the partnership means it’s more feasible than ever to set up a small-scale data center in orbit to handle at least some processing of space-based data near where it’s collected, rather than shuttling it all back down to Earth. That process can be expensive, and it can be difficult even to find the companies and infrastructure needed to support it. As with in-space manufacturing, doing things locally could save a lot of overhead and unlock tons of potential down the line.

Dec 03, 2019

Verizon and AWS announce 5G Edge computing partnership

Just as Qualcomm was starting to highlight its 5G plans for the coming years, Verizon CEO Hans Vestberg hit the stage at AWS re:Invent to discuss the carrier’s team up with the cloud computing giant.

As part of Verizon’s (TechCrunch’s parent company, disclosure, disclosure, disclosure) upcoming focus on 5G edge computing, the carrier will be the first to use the newly announced AWS Wavelength. The platform is designed to let developers build super-low-latency apps for 5G devices.

Currently, it’s being piloted in Chicago with a handful of high-profile partners, including the NFL and Bethesda, the game developer behind Fallout and Elder Scrolls. No details yet on those specific applications (though remote gaming and live streaming seem like the obvious ones), but potential future uses include things like smart cars, IoT devices, AR/VR — you know, the sorts of things people cite when discussing 5G’s life beyond the smartphone.

“AWS Wavelength provides the same AWS environment — APIs, management console and tools — that they’re using today at the edge of the 5G network,” AWS CEO Andy Jassy said onstage. “Starting with Verizon’s 5G network locations in the U.S., customers will be able to deploy the latency-sensitive portions of an application at the edge to provide single-digit millisecond latency to mobile and connected devices.”

While Vestberg was onstage at re:Invent, Verizon CNO Nicki Palmer joined Qualcomm onstage in Hawaii to discuss the carrier’s mmWave approach to next-gen wireless. The technology has raised some questions around its coverage area, which Verizon has addressed to some degree through partnerships with third parties like Boingo.

The company plans to have coverage in 30 U.S. cities by the end of the year; that number currently stands at 18.

Dec 03, 2019

AWS announces new enterprise search tool powered by machine learning

Today at AWS re:Invent in Las Vegas, the company announced a new search tool called Kendra, which provides natural language search across a variety of content repositories using machine learning.

Matt Wood, AWS VP of artificial intelligence, said the new search tool uses machine learning, but doesn’t actually require machine learning expertise of any kind. Amazon is taking care of that for customers under the hood.

You start by identifying your content repositories. This could be anything from an S3 storage repository to OneDrive to Salesforce — anywhere you store content. You can use pre-built connectors from AWS, provide your credentials and connect to all of these different tools.

Kendra then builds an index based on the content it finds in the connected repositories, and users can begin to interact with the search tool using natural language queries. The tool understands concepts like time, so if the question is something like “When is the IT help desk open?,” the search engine understands that this is about time, checks the index and delivers the right information to the user.
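With boto3, a query against such an index and the extraction of its answer could look roughly like the following sketch (the index ID is a placeholder; since a live call needs AWS credentials, only the request shape and the response handling, based on Kendra's documented `ResultItems` structure, are shown):

```python
# Sketch of a Kendra natural-language query. The index ID is a placeholder.
query_params = {
    "IndexId": "00000000-0000-0000-0000-000000000000",  # placeholder
    "QueryText": "When is the IT help desk open?",
}
# A live call would be:
# response = boto3.client("kendra").query(**query_params)

def top_answer(response):
    """Return the first ANSWER-type result from a Kendra query response."""
    for item in response.get("ResultItems", []):
        if item.get("Type") == "ANSWER":
            return item["DocumentExcerpt"]["Text"]
    return None
```

Result items come back typed (ANSWER, QUESTION_ANSWER, DOCUMENT), which is what lets an application surface a direct answer above a plain list of matching documents.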

The beauty of this search tool is not only that it uses machine learning, but that, based on simple feedback from a user, like a smiley face or sad face emoji, it can learn which answers are good and which ones require improvement, and it does this automatically for the search team.

Once you have it set up, you can drop the search tool onto your company intranet or use it internally inside an application, and it behaves as you would expect a search tool to, with features like type-ahead.

Dec 03, 2019

AWS’ CodeGuru uses machine learning to automate code reviews

AWS today announced CodeGuru, a new machine learning-based service that automates code reviews based on the data the company has gathered from doing code reviews internally.

Developers write their code as usual and simply add CodeGuru to their pull requests. For the time being, it supports GitHub and CodeCommit. CodeGuru uses its knowledge of reviews from Amazon and about 10,000 open-source projects to find issues, then comments on the pull request as needed. It identifies the issues, but it also suggests remediations and offers links to the relevant documentation.
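For CodeCommit, wiring a repository up to CodeGuru Reviewer is a single association call. A sketch of the request shape (the repository name is hypothetical; a live call needs boto3 and AWS credentials, so it is left commented out):

```python
# Hypothetical association request for a CodeCommit repository; the shape
# mirrors the codeguru-reviewer AssociateRepository API.
associate_request = {
    "Repository": {
        "CodeCommit": {"Name": "my-service-repo"}  # hypothetical repo name
    }
}
# boto3.client("codeguru-reviewer").associate_repository(**associate_request)
```

Once the repository is associated, CodeGuru reviews new pull requests automatically; there is no per-PR setup.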

Encoded in CodeGuru are AWS’s own best practices. Among other things, it also finds concurrency issues, incorrect handling of resources and issues with input validation.

AWS and Amazon’s consumer side have used the profiler component of CodeGuru for the last few years to find the “most expensive line of code.” In that time, even as some of the company’s applications grew, some teams were able to increase their CPU utilization by more than 325% at 36% lower cost.

Dec 03, 2019

AWS AutoPilot gives you more visible AutoML in SageMaker Studio

Today at AWS re:Invent in Las Vegas, the company announced AutoPilot, a new tool that gives you greater visibility into automated machine learning model creation, known as AutoML. This new tool is part of the new SageMaker Studio also announced today.

As AWS CEO Andy Jassy pointed out onstage today, one of the problems with AutoML is that it’s basically a black box. If you want to improve a mediocre model, or just evolve it for your business, you have no idea how it was built.

The idea behind AutoPilot is to give you the ease of model creation you get from an AutoML-generated model, but also give you much deeper insight into how the system built the model. “AutoPilot is a way to create a model automatically, but give you full visibility and control,” Jassy said.

“Using a single API call, or a few clicks in Amazon SageMaker Studio, SageMaker Autopilot first inspects your data set, and runs a number of candidates to figure out the optimal combination of data preprocessing steps, machine learning algorithms and hyperparameters. Then, it uses this combination to train an Inference Pipeline, which you can easily deploy either on a real-time endpoint or for batch processing. As usual with Amazon SageMaker, all of this takes place on fully-managed infrastructure,” the company explained in a blog post announcing the new feature.
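That single API call is `CreateAutoMLJob`. A sketch of a minimal request (the bucket paths, target column and role ARN are hypothetical; a live call needs boto3 and AWS credentials, so it is left commented out):

```python
# Sketch of the single CreateAutoMLJob call behind Autopilot. All names
# below are illustrative placeholders.
auto_ml_job = {
    "AutoMLJobName": "churn-autopilot",
    "InputDataConfig": [{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/train/",
        }},
        "TargetAttributeName": "churned",  # the column Autopilot predicts
    }],
    "OutputDataConfig": {"S3OutputPath": "s3://my-bucket/autopilot-output/"},
    "RoleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
}
# boto3.client("sagemaker").create_auto_ml_job(**auto_ml_job)
```

Everything else — preprocessing, algorithm selection, hyperparameter tuning — happens behind that one call, with the candidate notebooks written to the output path for inspection.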

You can look at each model’s parameters and see the 50 automated models Autopilot generated, and it provides a leaderboard of which models performed best. What’s more, you can look at a model’s underlying notebook and see what trade-offs were made to generate it. For instance, the top model may be the most accurate but sacrifice speed to get there.

Your company may have its own set of unique requirements and you can choose the best model based on whatever parameters you consider to be most important, even though it was generated in an automated fashion.

Once you have the model you like best, you can go into SageMaker Studio, select it and launch it with a single click. The tool is available now.

Dec 03, 2019

AWS speeds up Redshift queries 10x with AQUA

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache, and it promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS also today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of storage. You can build clusters of up to 128 of these instances.
