Apr
06
2021
--

Esri brings its flagship ArcGIS platform to Kubernetes

Esri, the geographic information system (GIS), mapping and spatial analytics company, is hosting its (virtual) developer summit today. Unsurprisingly, it is making a couple of major announcements at the event that range from a new design system and improved JavaScript APIs to support for running ArcGIS Enterprise in containers on Kubernetes.

The Kubernetes project was a major undertaking for the company, Esri Product Managers Trevor Seaton and Philip Heede told me. Traditionally, like so many similar products, ArcGIS was architected to be installed on physical boxes, virtual machines or cloud-hosted VMs. And while it doesn’t really matter to end-users where the software runs, containerizing the application means that it is far easier for businesses to scale their systems up or down as needed.


Esri ArcGIS Enterprise on Kubernetes deployment. Image Credits: Esri

“We have a lot of customers — especially some of the larger customers — that run very complex questions,” Seaton explained. “And sometimes it’s unpredictable. They might be responding to seasonal events or business events or economic events, and they need to understand not only what’s going on in the world, but also respond to their many users from outside the organization coming in and asking questions of the systems that they put in place using ArcGIS. And that unpredictable demand is one of the key benefits of Kubernetes.”


Deploying Esri ArcGIS Enterprise on Kubernetes. Image Credits: Esri

The team could have chosen to go the easy route and put a wrapper around its existing tools to containerize them and call it a day, but as Seaton noted, Esri used this opportunity to re-architect its tools and break them down into microservices.

“It’s taken us a while because we took three or four big applications that together make up [ArcGIS] Enterprise,” he said. “And we broke those apart into a much larger set of microservices. That allows us to containerize specific services and add a lot of high availability and resilience to the system without adding a lot of complexity for the administrators — in fact, we’re reducing the complexity as we do that and all of that gets installed in one single deployment script.”

While Kubernetes simplifies a lot of the management experience, a lot of companies that use ArcGIS aren’t yet familiar with it. And as Seaton and Heede noted, the company isn’t forcing anyone onto this platform. It will continue to support Windows and Linux just like before. Heede also stressed that it’s still unusual — especially in this industry — to see a complex, fully integrated system like ArcGIS being delivered in the form of microservices and multiple containers that its customers then run on their own infrastructure.

Image Credits: Esri

In addition to the Kubernetes announcement, Esri also today announced new JavaScript APIs that make it easier for developers to create applications that bring together Esri’s server-side technology and the scalability of doing much of the analysis on the client side. Back in the day, Esri would support tools like Microsoft’s Silverlight and Adobe/Apache Flex for building rich web-based applications. “Now, we’re really focusing on a single web development technology and the toolset around that,” Esri product manager Julie Powell told me.

A bit later this month, Esri also plans to launch its new design system to make it easier and faster for developers to create clean and consistent user interfaces. This design system will launch April 22, but the company already provided a bit of a teaser today. As Powell noted, the challenge for Esri is that its design system has to help the company’s partners put their own style and branding on top of the maps and data they get from the ArcGIS ecosystem.

 

Apr
06
2021
--

Google Cloud joins the FinOps Foundation

Google Cloud today announced that it is joining the FinOps Foundation as a Premier Member.

The FinOps Foundation is a relatively new open-source foundation, hosted by the Linux Foundation, that launched last year. It aims to bring together companies in the “cloud financial management” space to establish best practices and standards. As the term implies, “cloud financial management” is about the tools and practices that help businesses manage and budget their cloud spend. There’s a reason, after all, that there are a number of successful startups that do nothing else but help businesses optimize their cloud spend (and ideally lower it).

Maybe it’s no surprise that the FinOps Foundation was born out of Cloudability’s quarterly Customer Advisory Board meetings. Until now, CloudHealth by VMware was the Foundation’s only Premier Member among its vendor members. Other members include Cloudability, Densify, Kubecost and SoftwareOne. With Google Cloud, the Foundation has now signed up its first major cloud provider.

“FinOps best practices are essential for companies to monitor, analyze and optimize cloud spend across tens to hundreds of projects that are critical to their business success,” said Yanbing Li, vice president of Engineering and Product at Google Cloud. “More visibility, efficiency and tools will enable our customers to improve their cloud deployments and drive greater business value. We are excited to join FinOps Foundation, and together with like-minded organizations, we will shepherd behavioral change throughout the industry.”

Google Cloud has already committed to sending members to some of the Foundation’s various Special Interest Groups (SIGs) and Working Groups to “help drive open-source standards for cloud financial management.”

“The practitioners in the FinOps Foundation greatly benefit when market leaders like Google Cloud invest resources and align their product offerings to FinOps principles and standards,” said J.R. Storment, executive director of the FinOps Foundation. “We are thrilled to see Google Cloud increase its commitment to the FinOps Foundation, joining VMware as the second of three dedicated Premier Member Technical Advisory Council seats.”

Mar
18
2021
--

Slapdash raises $3.7M seed to ship a workplace apps command bar

The explosion in productivity software amid a broader remote work boom has been one of the pandemic’s clearest tech impacts. But learning to use a dozen new programs while having to decipher which data is hosted where can sometimes seem to have an adverse effect on worker productivity. It’s all time that users can take for granted, even when carrying out common tasks like navigating to the calendar to view more info to click a link to open the browser to redirect to the native app to open a Zoom call.

Slapdash is aiming to carve a new niche out for itself among workplace software tools, pushing a desire for peak performance to the forefront with a product that shaves seconds off each instance where a user needs to find data hosted in a cloud app or carry out an action. While most of the integration-heavy software suites to emerge during the remote work boom have focused on promoting visibility or re-skinning workflows across the tangled weave of SaaS apps, Slapdash founder Ivan Kanevski hopes that the company’s efforts to engineer a quicker path to information will push tech workers to integrate another tool into their workflow.

The team tells TechCrunch that they’ve raised $3.7 million in seed funding from investors that include S28 Capital, Quiet Capital, Quarry Ventures and Twenty Two Ventures. Angels participating in the round include co-founders at companies like Patreon, Docker and Zynga.

Image Credits: Slapdash

Kanevski says the team sought to emulate the success of popular apps like Superhuman, which have pushed low-latency command line interface navigation while emulating some of the sleek internal tools used at companies like Facebook, where he spent nearly six years as a software engineer.

Slapdash’s command line widget can be pulled up anywhere, once installed, with a quick keyboard shortcut. From there, users can search through a laundry list of indexable apps including Slack, Zoom, Jira and about 20 others. Beyond command line access, users can create folders of files and actions inside the full desktop app or create their own keyboard shortcuts to quickly hammer out a task. The app is available on Mac, Windows, Linux and the web.

“We’re not trying to displace the applications that you connect to Slapdash,” he says. “You won’t see us, for example, building document editing, you won’t see us building project management, just because our sort of philosophy is that we’re a neutral platform.”

The company offers a free tier for users indexing up to five apps and creating 10 commands and spaces; any more than that and you level up into a $12 per month paid plan. Things look more customized for enterprise-wide pricing. As the team hopes to make the tool essential to startups, Kanevski sees the app’s hefty utility for individual users as a clear asset in scaling up.

“If you anticipate rolling this out to larger organizations, you would want the people that are using the software to have a blast with it,” he says. “We have quite a lot of confidence that even at this sort of individual atomic level, we built something pretty joyful and helpful.”

Feb
15
2021
--

Bare Systemd Method to Create an XFS Mount


For MongoDB data directories, only XFS is recommended. The ext4 filesystem isn’t bad, but under a very high rate of random accesses (which WiredTiger can generate) it can hit a bottleneck. To be fair, most deployments will never hit this bottleneck, but it remains an official MongoDB production recommendation to use only XFS, and you get annoying warnings until you do.

On a fresh cloud server instance for your MongoDB hosts, it would be helpful if they always booted up with a flexibly-attached XFS mount for the MongoDB data directory. Your cloud service may not make this easy, though. For example, you can get a fresh, network-attached block device on demand with each new virtual server instance, but there is no “xfs” option available in that template configuration.

This is achievable if you script or configure something at the cloud service API level (e.g. launch with AWS CLI scripts in EC2, or use cloud-init for a vendor-neutral approach). But let’s assume you’re doing some one-time testing, or something like that, where the time investment in a cloud service script/recipe won’t pay off.
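
For completeness, a minimal cloud-init sketch of that API-level approach could look something like the following. Treat it as an illustration only – the device name, mount point, filesystem options, and whether your image enables the relevant cloud-init modules are all assumptions you would need to check:

#cloud-config
# Create an XFS filesystem on the attached block device.
# overwrite: false is meant to avoid reformatting if a filesystem already exists.
fs_setup:
  - label: mongodata
    device: /dev/xvdb
    filesystem: xfs
    overwrite: false
# Mount it at /data via an fstab entry that cloud-init writes.
mounts:
  - [ /dev/xvdb, /data, xfs, "defaults,nofail", "0", "2" ]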

The Ideal Method – Which Doesn’t Work Yet

Ideally, you would create a systemd mount unit, specify the filesystem type, and systemd would take care of formatting a freshly-attached block device if the filesystem wasn’t initialized yet.

But this is not supported so far (systemd GitHub issue #10014). I’ve also not been able to make the ‘x-systemd.makefs’ option, which systemd-fstab-generator handles, work on Amazon Linux 2 instances.
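
For reference, the /etc/fstab line for that ideal method would look something like the one below (the exact option set is an assumption; the device and mount point match the rest of this post):

/dev/xvdb  /data  xfs  defaults,x-systemd.makefs  0  2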

I assume you’re in the same situation if you’ve landed here.

Bare systemd Units to Do it All

One systemd unit is required for each of these steps: mkfs.xfs, mount, and chown mongod:mongod. The key points are:

  • Require the mkfs.xfs command to run when the block device is loaded by systemd, and before the “local-fs.target” (or “local-fs-pre.target”) target level
  • Make a mount unit to mount the block device at the desired directory (e.g. /data)
  • Run the chown command after the mount unit is up

The following example assumes:

  • The “mongod” user already exists. (Create it manually, or get it automatically when a MongoDB package is installed.)
  • /dev/xvdb is the device path.
  • /data is the path it will be mounted at.

/etc/systemd/system/mkfs.xfs_xvdb.service

[Unit]
Description=oneshot systemd service to XFS format /dev/xvdb device
After=dev-xvdb.device
Requires=dev-xvdb.device

[Service]
Type=oneshot
#Note the leading "-" in ExecStart. In systemd exec directives this means ignore non-zero exit code.
#systemd init will continue peacefully this way, even if mkfs.xfs error-exits in subsequent restarts because the block device was formatted already.
ExecStart=-/usr/sbin/mkfs.xfs /dev/xvdb

[Install]
WantedBy=local-fs.target

Enable with:

sudo systemctl enable mkfs.xfs_xvdb.service

 

/etc/systemd/system/data.mount

(!) Don’t forget to first create the /data directory in your server image’s root filesystem to be the mount point for the data.mount unit.

[Unit]
Description=systemd unit to mount /dev/xvdb at /data
After=mkfs.xfs_xvdb.service
Requires=mkfs.xfs_xvdb.service

[Mount]
What=/dev/xvdb
#N.b. "Where" must be reflected in the unit name.
#Eg. if it is for path "/data" we must name this unit file "data.mount".
#Substitute "-" in place of non-root "/" path delimiters. Eg. /srv/xyz --> "srv-xyz.mount"
Where=/data
Type=xfs

[Install]
WantedBy=multi-user.target

Enable with:

sudo systemctl enable data.mount
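
As the comments in the unit above note, the unit file name must be derived from the mount path. If in doubt, the standard systemd-escape utility can compute the name for you:

~]$ systemd-escape -p --suffix=mount /data
data.mount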

 

/etc/systemd/system/set_mongodb_data_dir_owner.service

[Unit]
Description=oneshot systemd service to chown mongod:mongod /data
After=data.mount
Requires=data.mount

[Service]
Type=oneshot
#Using -v (verbose) to produce message that can be seen in the systemd journal. This is optional.
ExecStart=/usr/bin/chown -v mongod:mongod /data

[Install]
WantedBy=multi-user.target

Enable with:

sudo systemctl enable set_mongodb_data_dir_owner.service

  

As you’re building a server image at this stage, you don’t have to start the units above – just enable them, then save the server image. Of course it should be tested, but the real goal is making it work in new server instances. So launch a fresh instance from the image and confirm that these systemd units are executed automatically during its startup.
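
On such a fresh instance, a quick spot-check like the following (unit names as in the examples above) should confirm that the whole chain ran:

systemctl status mkfs.xfs_xvdb.service data.mount set_mongodb_data_dir_owner.service
df -hT /data    # should report an xfs filesystem on /dev/xvdb
ls -ld /data    # should show mongod:mongod ownership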

 


 

The Output in systemd Journal

When a new server instance is started, the journal messages for these three units should look something like this:

mkfs.xfs_xvdb.service

~]$ journalctl -u mkfs.xfs_xvdb.service
<timestamp> <hostname> systemd[1]: Starting Formats /dev/xvdb device with XFS filesystem...
<timestamp> <hostname> mkfs.xfs[2494]: meta-data=/dev/xvdb              isize=512    agcount=4, agsize=524288 blks
<timestamp> <hostname> mkfs.xfs[2494]: =                       sectsz=512   attr=2, projid32bit=1
<timestamp> <hostname> mkfs.xfs[2494]: =                       crc=1        finobt=1, sparse=0
...
...
<timestamp> <hostname> mkfs.xfs[2494]: realtime =none                   extsz=4096   blocks=0, rtextents=0
<timestamp> <hostname> systemd[1]: Started Formats /dev/xvdb device with XFS filesystem.

Or, if after Reboot:

<timestamp> <hostname> systemd[1]: Starting Formats /dev/xvdb device with XFS filesystem...
<timestamp> <hostname> mkfs.xfs[2497]: mkfs.xfs: /dev/xvdb contains a mounted filesystem
<timestamp> <hostname> mkfs.xfs[2497]: Usage: mkfs.xfs
<timestamp> <hostname> mkfs.xfs[2497]: /* blocksize */                [-b log=n|size=num]
...
...
<timestamp> <hostname> mkfs.xfs[2497]: xxxm (xxx MiB), xxxg (xxx GiB), xxxt (xxx TiB) or xxxp (xxx PiB).
<timestamp> <hostname> mkfs.xfs[2497]: <value> is xxx (512 byte blocks).
<timestamp> <hostname> systemd[1]: Started Formats /dev/xvdb device with XFS filesystem.

Note that mkfs.xfs failing, and that failure being ignored, on the second and later boots is planned and expected, given the way this systemd service unit was written.

data.mount

~]$ journalctl -u data.mount
<timestamp> <hostname> systemd[1]: Mounting Mount block device xvdb at /data...
<timestamp> <hostname> systemd[1]: Mounted Mount block device xvdb at /data.

set_mongodb_data_dir_owner.service

~]$ journalctl -u set_mongodb_data_dir_owner.service
<timestamp> <hostname> systemd[1]: Starting Ensures mongod is owner of mounted XFS directory at /data...
<timestamp> <hostname> chown[2549]: changed ownership of ‘/data’ from root:root to mongod:mongod
<timestamp> <hostname> systemd[1]: Started Ensures mongod is owner of mounted XFS directory at /data

Or, if after Reboot:

<timestamp> <hostname> systemd[1]: Starting oneshot systemd service to chown mongod:mongod /data...
<timestamp> <hostname> chown[2549]: ownership of ‘/data’ retained as mongod:mongod
<timestamp> <hostname> systemd[1]: Started oneshot systemd service to chown mongod:mongod /data.

The Wrap-Up

systemd unit types and activation rules are tightly coupled with core Linux. You can use them to do the right thing, at the right time.

A server setup job that can be reduced to single commands such as /usr/bin/mkdir, /usr/sbin/mkfs*, /usr/bin/chown etc. is an opportunity for you to implement a minimalist systemd config project.

Scripts with systemd are fine too – make them the command that is run by ExecStart=… – but that’s a different experience from being able to see everything with just “systemctl cat <unit_name>” and “systemctl status”.

Typically systemd units run on every boot, not just the first one. A command such as mkfs.xfs should only be run once, however, so a trick is needed. This example relies on the fact that mkfs.xfs will not damage an existing filesystem (at least not without the -f force flag). Putting “-” in front of /usr/sbin/mkfs.xfs in the ExecStart directive is how its ‘filesystem already exists’ exit code is ignored.

Nov
20
2020
--

Webinar December 9: How to Measure Linux Performance Wrong


Don’t miss out! Join Peter Zaitsev, Percona CEO, as he discusses Linux performance measurement.

In this webinar, Peter will look at typical mistakes measuring or interpreting Linux Performance. He’ll discuss whether you should use LoadAvg to assess if your CPU is overloaded or Disk Utilization to see if your disks are overloaded. In addition, he’ll delve into a number of other metrics that are often misunderstood and/or misused. He’ll close the webinar with suggestions for better ways to measure Linux Performance.

Please join Peter Zaitsev, Percona CEO, on Wednesday, December 9 at 11:30 AM EST for his webinar “How to Measure Linux Performance Wrong”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Nov
12
2020
--

Mirantis brings extensions to its Lens Kubernetes IDE, launches a new Kubernetes distro

Earlier this year, Mirantis, the company that now owns Docker’s enterprise business, acquired Lens, a desktop application that provides developers with something akin to an IDE for managing their Kubernetes clusters. At the time, Mirantis CEO Adrian Ionel told me that the company wants to offer enterprises the tools to quickly build modern applications. Today, it’s taking another step in that direction with the launch of an extensions API for Lens that will take the tool far beyond its original capabilities.

In addition to this update to Lens, Mirantis also today announced a new open-source project: k0s. The company describes it as “a modern, 100% upstream vanilla Kubernetes distro that is designed and packaged without compromise.”

It’s a single optimized binary without any OS dependencies (besides the kernel). Based on upstream Kubernetes, k0s supports Intel and Arm architectures and can run on any Linux host or Windows Server 2019 worker nodes. Given these requirements, the team argues that k0s should work for virtually any use case, ranging from local development clusters to private data centers, telco clusters and hybrid cloud solutions.
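
For a sense of what that looks like in practice, here is a hedged sketch of standing up a single-node k0s cluster, based on the project’s public quick-start; the installer URL and flags are assumptions that may have changed, so check the current k0s docs:

# Download the single k0s binary and install it as a controller that also runs workloads
curl -sSLf https://get.k0s.sh | sudo sh
sudo k0s install controller --single
sudo k0s start
# The bundled kubectl can then talk to the new cluster
sudo k0s kubectl get nodes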

“We wanted to create a modern, robust and versatile base layer for various use cases where Kubernetes is in play. Something that leverages vanilla upstream Kubernetes and is versatile enough to cover use cases ranging from typical cloud based deployments to various edge/IoT type of cases,” said Jussi Nummelin, senior principal engineer at Mirantis and founder of k0s. “Leveraging our previous experiences, we really did not want to start maintaining the setup and packaging for various OS distros. Hence the packaging model of a single binary to allow us to focus more on the core problem rather than different flavors of packaging such as debs, rpms and what-nots.”

Mirantis, of course, has a bit of experience in the distro game. In its earliest iteration, back in 2013, the company offered one of the first major OpenStack distributions, after all.

Image Credits: Mirantis

As for Lens, the new API, which will go live next week to coincide with KubeCon, will enable developers to extend the service with support for other Kubernetes-integrated components and services.

“Extensions API will unlock collaboration with technology vendors and transform Lens into a fully featured cloud native development IDE that we can extend and enhance without limits,” said Miska Kaipiainen, the co-founder of the Lens open-source project and senior director of engineering at Mirantis. “If you are a vendor, Lens will provide the best channel to reach tens of thousands of active Kubernetes developers and gain distribution to your technology in a way that did not exist before. At the same time, the users of Lens enjoy quality features, technologies and integrations easier than ever.”

The company has already lined up a number of popular CNCF projects and vendors in the cloud-native ecosystem to build integrations. These include Kubernetes security vendors Aqua and Carbonetes, API gateway maker Ambassador Labs and AIOps company Carbon Relay. Venafi, nCipher, Tigera, Kong and StackRox are also currently working on their extensions.

“Introducing an extensions API to Lens is a game-changer for Kubernetes operators and developers, because it will foster an ecosystem of cloud-native tools that can be used in context with the full power of Kubernetes controls, at the user’s fingertips,” said Viswajith Venugopal, StackRox software engineer and developer of KubeLinter. “We look forward to integrating KubeLinter with Lens for a more seamless user experience.”

Nov
10
2020
--

With $29M in funding, Isovalent launches its cloud-native networking and security platform

Isovalent, a startup that aims to bring networking into the cloud-native era, today announced that it has raised a $29 million Series A round led by Andreessen Horowitz and Google. In addition, the company today officially launched its Cilium Enterprise platform (which was in stealth until now) to help enterprises connect, observe and secure their applications.

The open-source Cilium project is already seeing growing adoption, with Google choosing it for its new GKE dataplane, for example. Other users include Adobe, Capital One, Datadog and GitLab. Isovalent is following what is now the standard model for commercializing open-source projects by launching an enterprise version.

Image Credits: Cilium

The founding team of CEO Dan Wendlandt and CTO Thomas Graf has deep experience in working on the Linux kernel and building networking products. Graf spent 15 years working on the Linux kernel and created the Cilium open-source project, while Wendlandt worked on Open vSwitch at Nicira (and then VMware).

Image Credits: Isovalent

“We saw that first wave of network intelligence be moved into software, but I think we both shared the view that the first wave was about replicating the traditional network devices in software,” Wendlandt told me. “You had IPs, you still had ports, you created virtual routers, and this and that. We both had that shared vision that the next step was to go beyond what the hardware did in software — and now, in software, you can do so much more. Thomas, with his deep insight in the Linux kernel, really saw this eBPF technology as something that was just obviously going to be groundbreaking technology, in terms of where we could take Linux networking and security.”

As Graf told me, when Docker, Kubernetes and containers in general became popular, what he saw was that networking companies at first were simply trying to reapply what they had already done for virtualization. “Let’s just treat containers as many miniature VMs. That was incredibly wrong,” he said. “So we looked around, and we saw eBPF and said: this is just out there and it is perfect, how can we shape it forward?”

And while Isovalent’s focus is on cloud-native networking, the added benefit of how it uses the eBPF Linux kernel technology is that it also gains deep insights into how data flows between services and hence allows it to add advanced security features as well.

As the team noted, though, users definitely don’t need to understand or program eBPF, which is essentially the next generation of Linux kernel modules, themselves.
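
Instead, they work with higher-level policy and visibility constructs. As a rough illustration (not taken from Isovalent’s materials – the labels and port are made up), a Cilium network policy that only lets a frontend service reach a backend service on one port looks something like this:

apiVersion: "cilium.io/v2"
kind: CiliumNetworkPolicy
metadata:
  name: backend-allow-frontend
spec:
  endpointSelector:
    matchLabels:
      app: backend
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP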

Image Credits: Isovalent

“I have spent my entire career in this space, and the North Star has always been to go beyond IPs + ports and build networking visibility and security at a layer that is aligned with how developers, operations and security think about their applications and data,” said Martin Casado, partner at Andreessen Horowitz (and the founder of Nicira). “Until just recently, the technology did not exist. All of that changed with Kubernetes and eBPF. Dan and Thomas have put together the best team in the industry and given the traction around Cilium, they are well on their way to upending the world of networking yet again.”

As more companies adopt Kubernetes, they are now reaching a stage where they have the basics down but are now facing the next set of problems that come with this transition. Those, almost by default, include figuring out how to isolate workloads and get visibility into their networks — all areas where Isovalent/Cilium can help.

The team tells me its focus, now that the product is out of stealth, is on building out its go-to-market efforts and, of course, continuing to build out its platform.

Oct
09
2020
--

How Roblox completely transformed its tech stack

Picture yourself in the role of CIO at Roblox in 2017.

At that point, the gaming platform and publishing system that launched in 2005 was growing fast, but its underlying technology was aging, consisting of a single data center in Chicago and a bunch of third-party partners, including AWS, all running bare metal (nonvirtualized) servers. At a time when users have precious little patience for outages, your uptime was just two nines, or about 99% (five nines, or 99.999%, is considered optimal).

Unbelievably, Roblox was popular in spite of this, but the company’s leadership knew it couldn’t continue with performance like that, especially as it was rapidly gaining in popularity. The company needed to call in the technology cavalry, which is essentially what it did when it hired Dan Williams in 2017.

Williams has a history of solving these kinds of intractable infrastructure issues, with a background that includes a gig at Facebook between 2007 and 2011, where he worked on the technology to help the young social network scale to millions of users. Later, he worked at Dropbox, where he helped build a new internal network, leading the company’s move away from AWS, a major undertaking involving moving more than 500 petabytes of data.

When Roblox approached him in mid-2017, he jumped at the chance to take on another major infrastructure challenge. While they are still in the midst of the transition to a new modern tech stack today, we sat down with Williams to learn how he put the company on the road to a cloud-native, microservices-focused system with its own network of worldwide edge data centers.

Scoping the problem

Sep
23
2020
--

Webinar October 7: eBPFTrace – DTrace Replacement on Linux


Join Peter Zaitsev, Percona CEO, as he discusses eBPFTrace and DTrace on Linux.

While eBPF has been included in the Linux kernel for quite a few years, it lacked a good “front end” to match DTrace functionality in the Linux ecosystem. In this presentation, Peter will look into eBPFTrace as a capable DTrace replacement. He will also demonstrate how you can develop your own tools and utilities using eBPFTrace, as well as how you can use the tools from the eBPFTrace collection to get great insights into Linux operations.

Please join Peter Zaitsev on Wednesday, October 7, 2020, at 11:30 am EDT for his webinar “eBPFTrace – DTrace Replacement on Linux”.

Register for Webinar

If you can’t attend, sign up anyway and we’ll send you the slides and recording afterward.

Sep
22
2020
--

Microsoft’s Edge browser is coming to Linux in October

Microsoft’s Edge browser is coming to Linux, starting with the Dev channel. The first of these previews will go live in October.

When Microsoft announced that it would switch its Edge browser to the Chromium engine, it vowed to bring it to every popular platform. At the time, Linux wasn’t part of that list, but by late last year, it became clear that Microsoft was indeed working on a Linux version. Later, at this year’s Build, a Microsoft presenter even used it during a presentation.

Image Credits: Microsoft

Starting in October, Linux users will be able to either download the browser from the Edge Insider website or through their native package managers. Linux users will get the same Edge experience as users on Windows and macOS, as well as access to its built-in privacy and security features. For the most part, I would expect the Linux experience to be on par with that on the other platforms.

Microsoft also today announced that its developers have made more than 3,700 commits to the Chromium project so far. Some of this work has been on support for touchscreens, but the team also contributed to areas like accessibility features and developer tools, on top of core browser fundamentals.

Currently, Microsoft Edge is available on Windows 7, 8 and 10, as well as macOS, iOS and Android.
