Jul 26, 2018

GitHub and Google reaffirm partnership with Cloud Build CI/CD tool integration

When Microsoft acquired GitHub for $7.5 billion in June, it sent shock waves through the developer community, given GitHub’s central role as a code repository. Google certainly took notice, but the two companies continue to work closely together. Today at Google Next, they announced an expansion of their partnership around Google’s new CI/CD tool, Cloud Build, which was unveiled this week at the conference.

Politics aside, the purpose of the integration is to make life easier for developers by reducing the need to switch between tools. If GitHub recognizes a Dockerfile in a repository without a corresponding CI/CD tool, the developer will be prompted to grab one from the GitHub Marketplace, with Google Cloud Build offered prominently among the suggested tools.

Photo: GitHub

Should the developer choose to install Cloud Build, that’s where the tight integration comes into play. Developers can run Cloud Build against their code directly from GitHub, and the results will appear directly in the GitHub interface. They won’t have to switch applications to make this work together, and that should go a long way toward saving developer time and effort.
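To give a concrete flavor of what that means (a sketch, assuming the gcloud CLI is installed and the repository carries a cloudbuild.yaml; the GitHub integration itself triggers builds automatically on push), the same build could also be kicked off by hand from a checkout:

gcloud builds submit --config cloudbuild.yaml .

Because the build steps live in cloudbuild.yaml alongside the code, the configuration is the same whether the build is triggered from GitHub or from the command line.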

Google Cloud Build. Photo: Google

This is part of GitHub’s new “Smart Recommendations,” which will be rolling out to users in the coming months.

Melody Meckfessel, VP of Engineering for Google Cloud, says the two companies have a long history together and have always worked extremely well at the engineer-to-engineer level. “We have been working together from an engineering standpoint for so many years. We both believe in doing the right thing for developers. We believe that success as it relates to cloud adoption comes from collaborating in the ecosystem,” she said.

Given that close relationship, it had to be disappointing on some level when Microsoft acquired GitHub. In fact, Google Cloud head Diane Greene expressed sadness about the deal in an interview with CNBC earlier this week, but GitHub’s SVP of Technology Jason Warner believes that Microsoft will be a good steward and that the relationship with Google will remain strong.

Warner says the company’s founding principles were about not getting locked in to any particular platform, and he doesn’t see that changing after the acquisition is finalized. “One of the things that was critical in any discussion about an acquisition was that GitHub shall remain an open platform,” Warner explained.

He indicated that today’s announcement is just a starting point, and the two companies intend to build on this integration moving forward. “We worked pretty closely on this together. This announcement is a nod to some of the future-oriented partnerships that we will be announcing later in the year,” he said. And that partnership should continue unabated, even after the Microsoft acquisition is finalized later this year.

Jul 12, 2018

GitHub Enterprise and Business Cloud users now get access to public repos, too

GitHub, the code hosting service Microsoft recently acquired, is launching a couple of new features for its business users today that’ll make it easier for them to access public repositories on the service.

Traditionally, users on the hosted Business Cloud and self-hosted Enterprise plans were not able to directly access the millions of public open-source repositories on the service. Now, with the service’s latest release, that’s changing, and business users will be able to reach beyond their firewalls to engage and collaborate with the rest of the GitHub community directly.

With this, GitHub now also offers its business and enterprise users a new unified search feature that lets them search their internal repositories and public open-source ones at the same time.

Other new features in this latest Enterprise release include the ability to ignore whitespace when reviewing changes, the ability to require multiple reviewers for code changes, automated support tickets and more. You can find a full list of all updates here.

Microsoft’s acquisition of GitHub wasn’t fully unexpected (and it’s worth noting that the acquisition hasn’t closed yet), but it is still controversial, given that Microsoft and the open-source community, which heavily relies on GitHub, haven’t always seen eye-to-eye in the past. I’m personally not too worried about that, and it feels like the dust has settled at this point and that people are waiting to see what Microsoft will do with the service.

Jun 22, 2018

Security, privacy experts weigh in on the ICE doxxing

In what appears to be the latest salvo in a new, wired form of protest, developer Sam Lavigne posted code that scrapes LinkedIn to find Immigration and Customs Enforcement employee accounts. His code, which is basically a Python-based tool that scans LinkedIn for keywords, has been pulled from GitHub and GitLab, and Medium took down his original post. The CSV of the data is still available here and here, and WikiLeaks has posted a mirror.

“I find it helpful to remember that as much as internet companies use data to spy on and exploit their users, we can at times reverse the story, and leverage those very same online platforms as a means to investigate or even undermine entrenched power structures. It’s a strange side effect of our reliance on private companies and semi-public platforms to mediate nearly all aspects of our lives. We don’t necessarily need to wait for the next Snowden-style revelation to scrutinize the powerful — so much is already hiding in plain sight,” said Lavigne.

Doxxing is the process of using publicly available information to target someone online for abuse. Because we can now find out anything about anyone for a few dollars – a search for “background check” brings up dozens of paid services that can get you names and addresses in a second – scraping public data on LinkedIn seems far easier and more innocuous by comparison. That doesn’t make it legal.

“Recent efforts to outlaw doxxing at the national level (like the Online Safety Modernization Act of 2017) have stalled in committee, so it’s not strictly illegal,” said James Slaby, Security Expert at Acronis. “But LinkedIn and other social networks usually consider it a violation of their terms of service to scrape their data for personal use. The question of fairness is trickier: doxxing is often justified as a rare tool that the powerless can use against the powerful to call attention to perceived injustices.”

“The problem is that doxxing is a crude tool. The torrent of online ridicule, abuse and threats that can be heaped on doxxed targets by their political or ideological opponents can also rain down on unintended and undeserving targets: family members, friends, people with similar names or appearances,” he said.

The tool itself isn’t to blame. No one would fault a job seeker or salesperson who scraped LinkedIn for targeted employees of a specific company. That said, scraping and publicly shaming employees walks a thin line.

“In my opinion, the professor who developed this scraper tool isn’t breaking the law, as it’s perfectly legal to search the web for publicly available information,” said David Kennedy, CEO of TrustedSec. “This is known in the security space as ‘open source intelligence’ collection, and scrapers are just one way to do it. That said, it is concerning to see ICE agents doxxed in this way. I understand emotions are running high on both sides of this debate, but we don’t want to increase the physical security risks to our law enforcement officers.”

“The decision by Twitter, Github and Medium to block the dissemination of this information and tracking tool makes sense – in fact, law enforcement agents’ personal information is often protected. This isn’t going to go away anytime soon, it’s only going to become more aggressive, particularly as more people grow comfortable with using the darknet and the many available hacking tools for sale in these underground forums. Law enforcement agents need to take note of this, and be much more careful about what (and how often) they post online.”

Ultimately, doxxing is problematic. Because we place our information on public forums, there is little to stop anyone from finding and posting it. However, the expectation that people will use our information for good and not evil is swiftly eroding. Today, as security researcher David Kavanaugh put it, doxxing is becoming dangerous.

“Going after the people on the ground is like shooting the messenger. Decisions are made by leadership and those are the people we should be going after. Doxxing is akin to a personal attack. Change policy, don’t ruin more lives,” he said.

Jun 4, 2018

Microsoft promises to keep GitHub independent and open

Microsoft today announced its plans to acquire GitHub for $7.5 billion in stock. Unsurprisingly, that sent a few shock waves through the developer community, which still often eyes Microsoft with considerable unease. During a conference call this morning, Microsoft CEO Satya Nadella, incoming GitHub CEO (and Xamarin founder) Nat Friedman and GitHub co-founder and outgoing CEO Chris Wanstrath laid out the plans for GitHub’s future under Microsoft.

The core message everybody on today’s call stressed was that GitHub will continue to operate as an independent company. That’s very much the approach Microsoft took with its acquisition of LinkedIn, but to some degree, it’s also an admission that Microsoft is aware of its reputation among many of the developers who call GitHub their home. GitHub will remain an open platform that any developer can plug into and extend, Microsoft promises. It’ll support any cloud and any device.

Unsurprisingly, while the core of GitHub won’t change, Microsoft does plan to extend GitHub’s enterprise services and integrate them with its own sales and partner channels. And Nadella noted that the company will use GitHub to bring Microsoft’s developer tools and services “to new audiences.”

With Nat Friedman taking over as CEO, GitHub will have a respected technologist at the helm. Microsoft’s acquisition and integration of Xamarin has, at least from the outside, been a success (and Friedman himself always seems very happy about the outcome when I talk to him), so I think this bodes quite well for GitHub. After joining Microsoft, Friedman ran the developer services team at the company. Wanstrath, who only took over the CEO role again after his predecessor was ousted following a harassment scandal at the company, had long said that he wanted to step down and take a more active product role. And that’s what’s happening now that Friedman is taking over. Wanstrath will become a technical fellow and work on “strategic software initiatives” at Microsoft.

Indeed, during an interview after the acquisition was announced, Friedman repeatedly noted that he thinks GitHub is the most important developer company today — and it turns out that he started advocating for a closer relationship between the two companies right after he joined Microsoft two years ago.

During today’s press call, Friedman also stressed Microsoft’s commitment to keeping GitHub as open as it is today — but he also plans to expand the service and its community. “We want to bring more developers and more capabilities to GitHub,” he said. “Because as a network and as a group of people in a community, GitHub is stronger the bigger it is.”

Friedman echoed that in our interview later in the day and noted that he expected the developer community to be skeptical of the mashup of these two companies. “There is always healthy skepticism in the developer community,” he told me. “I would ask developers to look at the last few years of Microsoft history and really honestly Microsoft’s transformation into an open source company.” He asked developers to judge Microsoft by that and noted that what really matters, of course, is that the company will follow through on the promises it made today.

As for the product itself, Friedman noted that everything GitHub does should be about making a developer’s life easier. And to get started, that’ll mean making developing in the cloud easier. “We think broadly about the new and compelling types of ways that we can integrate cloud services into GitHub,” he noted. “And this doesn’t just apply to our cloud. GitHub is an open platform. So we have the ability for anyone to plug their cloud services into GitHub, and make it easier for you to go from code to cloud. And it extends beyond the cloud as well. Code to cloud, code to mobile, code to edge device, code to IoT. Every workflow that a developer wants to pursue, we will support.”

Another area the company will work on is the GitHub Marketplace. Microsoft says that it will offer all of its developer tools and services in the GitHub Marketplace.

And unsurprisingly, VS Code, Microsoft’s free and open source code editor, will get deeply integrated GitHub support.

“Our vision is really all about empowering developers and creating a home where you can use any language, any operating system, any cloud, any device for every developer, whether you’re a student, a hobbyist, a large company, a startup or anything in between. GitHub is the home for all developers,” said Friedman. In our interview, he also stressed that his focus will be on making “GitHub better at making GitHub” and that he plans to do so by bringing Microsoft’s resources and infrastructure to the code hosting service, while at the same time leaving it to operate independently.

It’s unclear whether all of these commitments will ease developers’ fears of losing GitHub as a relatively neutral third party in the ecosystem.

Nadella, who is surely aware of this, addressed it directly today. “We recognize the responsibility we take on with this agreement,” he said. “We are committed to being stewards of the GitHub community, which will retain its developer-first ethos, operate independently and remain an open platform. We will always listen to developer feedback and invest in both fundamentals and new capabilities once the acquisition closes.”

In his prepared remarks, Nadella also stressed Microsoft’s heritage as a developer-centric company and noted that it is already the most active organization on GitHub. But more importantly, he addressed Microsoft’s role in the open source community, too. “We have always loved developers, and we love open source developers,” he said. “We’ve been on a journey ourselves with open source and the open source community. Today, we are all in with open source. We are active in the open source ecosystem. We contribute to open source projects, and some of our most vibrant developer tools and frameworks are open source. When it comes to our commitment to open source, judge us by the actions we have taken in the recent past, our actions today and in the future.”

May 21, 2018

Uizard raises funds for its AI that turns design mockups into source code

When you’re trying to build apps, there is a very tedious point where you have to stare at a wireframe and then laboriously turn it into code. The process is highly repetitive and ought to be much easier: the traditional path from front-end design to front-end HTML/CSS development to working code is expensive, time-consuming, tedious and repetitive.

But most approaches to solving this problem have been more complex than they need to be. What if you could just turn wireframes straight into code and then devote your time to the more complex aspects of a build?

That’s the idea behind a Copenhagen-based startup called Uizard.

Uizard’s computer vision and AI platform claims to be able to automatically turn design mockups — and these could be on the back of a napkin — into source code that developers can plug into their backend code.

It’s now raised an $800,000 pre-seed round led by New York-based LDV Capital with co-investors ByFounders, The Nordic Web Ventures, 7percent Ventures, New York Venture Partners, entrepreneur Peter Stern (co-founder of Datek) and Philipp Moehring and Andy Chung from AngelList. The funding will be used to grow the team and launch the beta product.

The company first attracted attention in June 2017, when it released its initial research milestone, dubbed “pix2code”; the implementation on GitHub was the second-most-trending project of June 2017, ahead of Facebook Prepack and Google TensorFlow.

Mar 22, 2018

GitLab adds support for GitHub

Here is an interesting twist: GitLab, which in many ways competes with GitHub as a shared code repository service for teams, is bringing its continuous integration and delivery (CI/CD) features to GitHub.

The new service is launching today as part of GitLab’s hosted service. It will remain free to developers until March 22, 2019. After that, it’s moving to GitLab.com’s paid Silver tier.

GitHub itself offers some basic project and task management services on top of its core tools, but for the most part, it leaves the rest of the DevOps lifecycle to partners. GitLab offers a more complete CI/CD solution with integrated code repositories, but while GitLab has grown in popularity, GitHub is surely better known among developers and businesses. With this move, GitLab hopes to gain new users — and especially enterprise users — who are currently storing their code on GitHub but are looking for a CI/CD solution.

The new GitHub integration allows developers to set up their projects in GitLab and connect them to a GitHub repository. So whenever developers push code to their GitHub repository, GitLab will kick off that project’s CI/CD pipeline with automated builds, tests and deployments.
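For a sense of what that pipeline definition looks like, here is a minimal, hypothetical .gitlab-ci.yml committed to the repository (the make targets are placeholders; a project would put its own build and test commands in the script sections):

build:
  stage: build
  script:
    - make build

test:
  stage: test
  script:
    - make test

With a file like this in place, every push to the connected GitHub repository runs both jobs on GitLab and reports the results back as commit statuses.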

“Continuous integration and deployment form the backbone of modern DevOps,” said Sid Sijbrandij, CEO and co-founder of GitLab. “With this new offering, businesses and open source projects that use GitHub as a code repository will have access to GitLab’s industry leading CI/CD capabilities.”

It’s worth noting that GitLab offers a very similar integration with Atlassian’s Bitbucket, too.

Feb 22, 2018

Percona Live 2018 Featured Talk – Scaling a High-Traffic Database: Moving Tables Across Clusters with Bryana Knight

Welcome to the first interview blog for the upcoming Percona Live 2018. Each post in this series highlights a Percona Live 2018 featured talk that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post highlights Bryana Knight, Platform Engineer at GitHub. Her talk is titled Scaling a High-Traffic Database: Moving Tables Across Clusters. Facing an immediate need to distribute load, GitHub came up with creative ways to move a significant amount of traffic off of their main MySQL cluster – with no user impact. In our conversation, we discussed how Bryana and GitHub solved some of these issues:

Percona: Who are you, and how did you get into databases? What was your path to your current responsibilities?

Bryana: I started at GitHub as a full-stack engineer working on a new business offering, and was then shortly offered the opportunity to transition to the database services team. Our priorities back then included reviewing every single database migration for GitHub.com. Having spent my whole career as a full-stack engineer, I had to level up pretty quickly on MySQL, data modeling, data access patterns – basically everything databases. I spent the first few months learning our schema and setup through lots of reading, mentorship from other members of my team, reviewing migrations for most of our tables, and asking a million questions.

Originally, my team spent a lot of time addressing immediate performance concerns. Then we started partnering with product engineering teams to build out the backends for new features. Now we are focused on the long-term scalability and availability of our database, stemming from how we access it. I work right between our DBAs and our product and API engineers.

Percona: Your talk is titled “Scaling a High-Traffic Database: Moving Tables Across Clusters”. What were the challenges GitHub faced that required redistributing your tables?

Bryana: The biggest part of the GitHub codebase is an eight-year-old monolith. As a company, we’ve been fortunate enough to see a huge amount of user growth since the company started. User growth means data growth. The schema and setup that worked for GitHub early on, and very much allowed GitHub to get to where it is today with tons of features and an extremely robust API, is not necessarily the right schema and setup for the size GitHub is today.

We were seeing that higher than “normal” load was starting to have a more noticeable effect. The monolith aspect of our database, organic growth, plus inefficiencies in our code base were putting a lot of pressure on the master of our primary database cluster, which held our most core tables (think users, repos, permissions). From the database perspective, this meant contention, locking, and replica lag. From the user’s perspective, this meant anything from longer page loads to delays in UI updates and notifications, to timeouts. 

Percona: What were some of the other options you looked at (if any)?

Bryana: Moving tables out of our main cluster was not the only action we took to alleviate some of the pressure in our database. However, it was the highest impact change we could make in the medium-term to give us the breathing room we needed and improve performance and availability. We also prioritized efforts around moving more reads to replicas and off the master, throttling more writes where possible, index improvements and query optimizations. Moving these tables gave us the opportunity to start thinking more long-term about how we can store and access our data differently to allow us to scale horizontally while maintaining our healthy pace of feature development.

Percona: What were the issues that needed to be worked out between the different teams you mention in your description? How did they impact the project?

Bryana: Moving tables out of our main database required collaboration between multiple teams. The team I’m on, database-services, was responsible for coming up with the strategy to move tables without user impact, writing the code to handle query isolation and routing, connection switching, backgrounding writes, and so on. Our database-infrastructure team determined where the tables we were moving should go (a new cluster or an existing one), set up the clusters, and advised us on how to safely copy the data. In some cases, we were able to use MySQL replication. When that wasn’t possible, they weighed in on other options.

We worked with production engineers to isolate data access to these tables and safely split JOINs with other tables. Everybody needed to be sure we weren’t affecting performance and user experience when doing this. We discussed with our support team the risk of what we were doing, and then worked with them to determine if we should preemptively set the site status to yellow when there was a higher risk of user impact. During the actual cut-overs, representatives from all these groups would get on a war-room-like video call and “push the button”, and we always made sure to have a roll-out and roll-back plan.

Percona: Why should people attend your talk? What do you hope people will take away from it?

Bryana: In terms of database performance, there are a lot of little things you can do immediately to try and make improvements: things like adding indexes, tweaking queries and denormalizing data. There are also more drastic, architectural changes you can pursue, which many companies need to do when they get to a certain scale. The topic of this talk is a valid strategy that fits between these two extremes. It relieved some ongoing performance problems and availability risk, while giving us some breathing room to think long term. I think other applications and databases might be in a similar situation and this could work for them.

Percona: What are you looking forward to at Percona Live (besides your talk)?

Bryana: This is actually the first time I’m attending a Percona Live conference. I’m hoping to learn from some of the talks around scaling a high-traffic database and sharding. I’m also looking forward to seeing some talks from the wonderful folks on GitHub’s database-infrastructure team.

Want to find out more about this Percona Live 2018 featured talk, and Bryana and GitHub’s migration? Register for Percona Live 2018, and see her talk Scaling a High-Traffic Database: Moving Tables Across Clusters. Register now to get the best price!

Percona Live Open Source Database Conference 2018 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference will be April 23-25, 2018 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Feb 6, 2018

Announcing Experimental Percona Monitoring and Management (PMM) Functionality via Percona Labs

In this blog post, we’ll introduce how you can look at some experimental Percona Monitoring and Management (PMM) features using Percona Labs builds on GitHub.

Note: PerconaLabs and Percona-QA are open source GitHub repositories for unofficial scripts and tools created by Percona staff. The software builds they contain are not officially released software and aren’t covered by Percona support or services agreements, but these handy utilities can help you save time and effort.

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

This month we’re announcing access to Percona Labs builds of Percona Monitoring and Management so that you can experiment with new functionality that’s not yet in our mainline product. You can identify the unique builds at:

https://hub.docker.com/r/perconalab/pmm-server/tags/

Most of the entries here are the pre-release candidate images we use for QA, and they follow a format of all integers (for example “201802061627”). You’re fine to use these images, but they aren’t the ones that have the experimental functionality.

Today we have two builds of note (these DO have the experimental functionality):

  • 1.6.0-prom2.1
  • 1.5.3-prometheus2

We’re highlighting Prometheus 2.1 on top of our January 1.6 release (1.6.0-prom2.1), available in Docker format. Some of the reasons you might want to deploy this experimental build to take advantage of the Prometheus 2 benefits are:

  • Reduced CPU usage by Prometheus, meaning you can add more hosts to your PMM Server
  • Performance improvements, meaning dashboards load faster
  • Reduced disk I/O, disk space usage

Please keep in mind that this is a Percona Labs build (see our note above), so note the following two caveats:

  • Support is available from our Percona Monitoring and Management Forums
  • Upgrades might not work – don’t count on upgrading out of this version to a newer release (although it’s not guaranteed to block upgrades)

How to Deploy an Experimental Build from Percona Labs

The great news is that you can follow our Deployment Instructions for Docker, and the only change is where you specify a different Docker container to pull. For example, the standard way to deploy the latest stable PMM Server release with Docker is:

docker pull percona/pmm-server:latest

To use the Percona Labs build 1.6.0-prom2.1 with Prometheus 2.1, execute the following:

docker pull perconalab/pmm-server:1.6.0-prom2.1
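From here, the remaining steps are unchanged from the standard Docker instructions apart from the image tag. As a rough sketch (the data-container pattern, volume paths and container names follow Percona’s published PMM 1.x deployment docs, but verify them against the current instructions):

docker create -v /opt/prometheus/data -v /opt/consul-data -v /var/lib/mysql -v /var/lib/grafana --name pmm-data perconalab/pmm-server:1.6.0-prom2.1 /bin/true

docker run -d -p 80:80 --volumes-from pmm-data --name pmm-server --restart always perconalab/pmm-server:1.6.0-prom2.1

The first command creates a persistent data container so your metrics survive container restarts; the second starts PMM Server itself on port 80.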

Please share your feedback on this build on our Percona Monitoring and Management Forums.

If you’re looking to deploy Percona’s officially released PMM Server (not the Percona Labs build, but our mainline version, which is currently release 1.7) into a production environment, I encourage you to consider a Percona Support contract, which includes PMM at no additional charge!

Jul 11, 2017

Abstract launches as the versioning system of record for design

Sales teams have Salesforce. Engineers have GitHub. But designers have always had slim pickings. Abstract, launching today, is a workflow platform and system of record built for designers to solve the debilitating frustrations of the design process. The company is targeting Sketch users out of the gate, with ambitions to accommodate the whole gamut of visual file types.

May 2, 2017

Facebook’s fastText library is now optimized for mobile

This morning Facebook’s AI Research (FAIR) lab released an update to fastText, its super-speedy open-source text classification library. When it was initially released, fastText shipped with pre-trained word vectors for 90 languages, but today it’s getting a boost to 294 languages. The release also brings enhancements to reduce model size and ultimately memory demand.
