Jun
04
2018
--

Microsoft promises to keep GitHub independent and open

Microsoft today announced its plans to acquire GitHub for $7.5 billion in stock. Unsurprisingly, that sent a few shock waves through the developer community, which still often eyes Microsoft with considerable unease. During a conference call this morning, Microsoft CEO Satya Nadella, incoming GitHub CEO (and Xamarin founder) Nat Friedman and GitHub co-founder and outgoing CEO Chris Wanstrath laid out the plans for GitHub’s future under Microsoft.

The core message everybody on today’s call stressed was that GitHub will continue to operate as an independent company. That’s very much the approach Microsoft took with its acquisition of LinkedIn, but to some degree, it’s also an admission that Microsoft is aware of its reputation among many of the developers who call GitHub their home. GitHub will remain an open platform that any developer can plug into and extend, Microsoft promises. It’ll support any cloud and any device.

Unsurprisingly, while the core of GitHub won’t change, Microsoft does plan to extend GitHub’s enterprise services and integrate them with its own sales and partner channels. And Nadella noted that the company will use GitHub to bring Microsoft’s developer tools and services “to new audiences.”

With Nat Friedman taking over as CEO, GitHub will have a respected technologist at the helm. Microsoft’s acquisition and integration of Xamarin has, at least from the outside, been a success (and Friedman himself always seems very happy about the outcome when I talk to him), so I think this bodes quite well for GitHub. After joining Microsoft, Friedman ran the developer services team at the company. Wanstrath, who only took over the CEO role again after the company’s previous CEO was ousted following a harassment scandal, had long said that he wanted to step down and take a more active product role. And that’s what’s happening now that Friedman is taking over. Wanstrath will become a technical fellow and work on “strategic software initiatives” at Microsoft.

Indeed, during an interview after the acquisition was announced, Friedman repeatedly noted that he thinks GitHub is the most important developer company today — and it turns out that he started advocating for a closer relationship between the two companies right after he joined Microsoft two years ago.

During today’s press call, Friedman also stressed Microsoft’s commitment to keeping GitHub as open as it is today — but he also plans to expand the service and its community. “We want to bring more developers and more capabilities to GitHub,” he said. “Because as a network and as a group of people in a community, GitHub is stronger the bigger it is.”

Friedman echoed that in our interview later in the day and noted that he expected the developer community to be skeptical of the mashup of these two companies. “There is always healthy skepticism in the developer community,” he told me. “I would ask developers to look at the last few years of Microsoft history and really honestly Microsoft’s transformation into an open source company.” He asked developers to judge Microsoft by that and noted that what really matters, of course, is that the company will follow through on the promises it made today.

As for the product itself, Friedman noted that everything GitHub does should be about making a developer’s life easier. And to get started, that’ll mean making developing in the cloud easier. “We think broadly about the new and compelling types of ways that we can integrate cloud services into GitHub,” he noted. “And this doesn’t just apply to our cloud. GitHub is an open platform. So we have the ability for anyone to plug their cloud services into GitHub, and make it easier for you to go from code to cloud. And it extends beyond the cloud as well. Code to cloud, code to mobile, code to edge device, code to IoT. Every workflow that a developer wants to pursue, we will support.”

Another area the company will work on is the GitHub Marketplace. Microsoft says that it will offer all of its developer tools and services in the GitHub Marketplace.

And unsurprisingly, VS Code, Microsoft’s free and open source code editor, will get deeply integrated GitHub support.

“Our vision is really all about empowering developers and creating a home where you can use any language, any operating system, any cloud, any device for every developer, whether you’re a student, a hobbyist, a large company, a startup or anything in between. GitHub is the home for all developers,” said Friedman. In our interview, he also stressed that his focus will be on making “GitHub better at making GitHub” and that he plans to do so by bringing Microsoft’s resources and infrastructure to the code hosting service, while at the same time leaving it to operate independently.

It’s unclear whether all of these commitments will ease developers’ fears of losing GitHub as a relatively neutral third party in the ecosystem.

Nadella, who is surely aware of this, addressed it directly today. “We recognize the responsibility we take on with this agreement,” he said. “We are committed to being stewards of the GitHub community, which will retain its developer-first ethos, operate independently and remain an open platform. We will always listen to developer feedback and invest in both fundamentals as well as new capabilities once the acquisition closes.”

In his prepared remarks, Nadella also stressed Microsoft’s heritage as a developer-centric company and that it is already the most active organization on GitHub. But more importantly, he addressed Microsoft’s role in the open source community, too. “We have always loved developers, and we love open source developers,” he said. “We’ve been on a journey ourselves with open source and the open source community. Today, we are all in with open source. We are active in the open source ecosystem, we contribute to open source projects, and some of our most vibrant developer tools and frameworks are open source. When it comes to our commitment to open source, judge us by the actions we have taken in the recent past, our actions today and in the future.”

May
21
2018
--

Uizard raises funds for its AI that turns design mockups into source code

When you’re building apps, there is a very tedious point where you have to stare at a wireframe and then laboriously turn it into code. The process is highly repetitive and ought to be much easier. The traditional path from front-end design to front-end HTML/CSS development to working code is expensive, time-consuming, tedious and repetitive.

But most approaches to solving this problem have been more complex than they need to be. What if you could just turn wireframes straight into code and then devote your time to the more complex aspects of a build?

That’s the idea behind a Copenhagen-based startup called Uizard.

Uizard’s computer vision and AI platform claims to be able to automatically turn design mockups — even one sketched on the back of a napkin — into source code that developers can plug into their backend code.

It’s now raised an $800,000 pre-seed round led by New York-based LDV Capital with co-investors ByFounders, The Nordic Web Ventures, 7percent Ventures, New York Venture Partners, entrepreneur Peter Stern (co-founder of Datek) and Philipp Moehring and Andy Chung from AngelList. The funding will be used to grow the team and launch the beta product.

The company first attracted attention in June 2017 when it released its first research milestone, dubbed “pix2code”; its implementation on GitHub was the second-most-trending project of June 2017, ahead of Facebook Prepack and Google TensorFlow.

Mar
22
2018
--

GitLab adds support for GitHub

Here is an interesting twist: GitLab, which in many ways competes with GitHub as a shared code repository service for teams, is bringing its continuous integration and delivery (CI/CD) features to GitHub.

The new service is launching today as part of GitLab’s hosted service. It will remain free to developers until March 22, 2019. After that, it’s moving to GitLab.com’s paid Silver tier.

GitHub itself offers some basic project and task management services on top of its core tools, but for the most part, it leaves the rest of the DevOps lifecycle to partners. GitLab offers a more complete CI/CD solution with integrated code repositories, but while GitLab has grown in popularity, GitHub is surely better known among developers and businesses. With this move, GitLab hopes to gain new users — and especially enterprise users — who are currently storing their code on GitHub but are looking for a CI/CD solution.

The new GitHub integration allows developers to set up their projects in GitLab and connect them to a GitHub repository. So whenever developers push code to their GitHub repository, GitLab will kick off that project’s CI/CD pipeline with automated builds, tests and deployments.
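As with any GitLab project, the pipeline itself is driven by a .gitlab-ci.yml file checked into the repository. A minimal sketch of what such a definition might look like (the stage names, job names and make targets here are illustrative, not part of GitLab’s announcement):

```yaml
# Hypothetical pipeline: build first, then test, on every push mirrored from GitHub
stages:
  - build
  - test

build_job:
  stage: build
  script:
    - make build        # replace with your project's build command

test_job:
  stage: test
  script:
    - make test         # replace with your project's test command
```

GitLab runs all jobs in the build stage first and only proceeds to the test stage if they succeed.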

“Continuous integration and deployment form the backbone of modern DevOps,” said Sid Sijbrandij, CEO and co-founder of GitLab. “With this new offering, businesses and open source projects that use GitHub as a code repository will have access to GitLab’s industry-leading CI/CD capabilities.”

It’s worth noting that GitLab offers a very similar integration with Atlassian’s Bitbucket, too.

Feb
22
2018
--

Percona Live 2018 Featured Talk – Scaling a High-Traffic Database: Moving Tables Across Clusters with Bryana Knight

Welcome to the first interview blog for the upcoming Percona Live 2018. Each post in this series highlights a Percona Live 2018 featured talk that will be at the conference and gives a short preview of what attendees can expect to learn from the presenter.

This blog post highlights Bryana Knight, Platform Engineer at GitHub. Her talk is titled Scaling a High-Traffic Database: Moving Tables Across Clusters. Facing an immediate need to distribute load, GitHub came up with creative ways to move a significant amount of traffic off of their main MySQL cluster – with no user impact. In our conversation, we discussed how Bryana and GitHub solved some of these issues:

Percona: Who are you, and how did you get into databases? What was your path to your current responsibilities?

Bryana: I started at GitHub as a full-stack engineer working on a new business offering, and was then shortly offered the opportunity to transition to the database services team. Our priorities back then included reviewing every single database migration for GitHub.com. Having spent my whole career as a full-stack engineer, I had to level-up pretty quickly on MySQL, data modeling, data access patterns – basically everything databases. I spent the first few months learning our schema and setup through lots of reading, mentorship from other members of my team, reviewing migrations for most of our tables, and asking a million questions.

Originally, my team spent a lot of time addressing immediate performance concerns. Then we started partnering with product engineering teams to build out the backends for new features. Now we are focused on the long-term scalability and availability of our database, stemming from how we access it. I work right between our DBAs and our product and API engineers.

Percona: Your talk is titled “Scaling a High-Traffic Database: Moving Tables Across Clusters”. What were the challenges GitHub faced that required redistributing your tables?

Bryana: The biggest part of the GitHub codebase is an 8-year-old monolith. As a company, we’ve been fortunate enough to see a huge amount of user growth since the company started. User growth means data growth. The schema and setup that worked for GitHub early on, and very much allowed GitHub to get to where it is today with tons of features and an extremely robust API, is not necessarily the right schema and setup for the size GitHub is today.

We were seeing that higher than “normal” load was starting to have a more noticeable effect. The monolith aspect of our database, organic growth, plus inefficiencies in our code base were putting a lot of pressure on the master of our primary database cluster, which held our most core tables (think users, repos, permissions). From the database perspective, this meant contention, locking, and replica lag. From the user’s perspective, this meant anything from longer page loads to delays in UI updates and notifications, to timeouts. 

Percona: What were some of the other options you looked at (if any)?

Bryana: Moving tables out of our main cluster was not the only action we took to alleviate some of the pressure in our database. However, it was the highest impact change we could make in the medium-term to give us the breathing room we needed and improve performance and availability. We also prioritized efforts around moving more reads to replicas and off the master, throttling more writes where possible, index improvements and query optimizations. Moving these tables gave us the opportunity to start thinking more long-term about how we can store and access our data differently to allow us to scale horizontally while maintaining our healthy pace of feature development.

Percona: What were the issues that needed to be worked out between the different teams you mention in your description? How did they impact the project?

Bryana: Moving tables out of our main database required collaboration between multiple teams. The team I’m on, database-services, was responsible for coming up with the strategy to move tables without user impact, writing the code to handle query isolation and routing, connection switching, backgrounding writes, and so on. Our database-infrastructure team determined where the tables we were moving should go (new cluster or existing), setup the clusters, and advised us on how to safely copy the data. In some cases, we were able to use MySQL replication. When that wasn’t possible, they weighed in on other options. 

We worked with production engineers to isolate data access to these tables and safely split JOINs with other tables. Everybody needed to be sure we weren’t affecting performance and user experience when doing this. We discussed with our support team the risk of what we were doing. Then we worked with them to determine if we should preemptively status yellow when there was a higher risk of user impact. During the actual cut-overs, representatives from all these groups would get on a war-room-like video call and “push the button”, and we always made sure to have a roll-out and roll-back plan. 

Percona: Why should people attend your talk? What do you hope people will take away from it?

Bryana: In terms of database performance, there are a lot of little things you can do immediately to try and make improvements: things like adding indexes, tweaking queries, and denormalizing data. There are also more drastic, architectural changes you can pursue, that many companies need to do when they get to certain scale. The topic of this talk is a valid strategy that fits between these two extremes. It relieved some ongoing performance problems and availability risk, while giving us some breathing room to think long term. I think other applications and databases might be in a similar situation and this could work for them. 

Percona: What are you looking forward to at Percona Live (besides your talk)?

Bryana: This is actually the first time I’m attending a Percona Live conference. I’m hoping to learn from some of the talks around scaling a high-traffic database and sharding. I’m also looking forward to seeing some talks from the wonderful folks on the GitHub database-infrastructure team.

Want to find out more about this Percona Live 2018 featured talk, and Bryana and GitHub’s migration? Register for Percona Live 2018, and see her talk Scaling a High-Traffic Database: Moving Tables Across Clusters. Register now to get the best price!

Percona Live Open Source Database Conference 2018 is the premier open source event for the data performance ecosystem. It is the place to be for the open source community. Attendees include DBAs, sysadmins, developers, architects, CTOs, CEOs, and vendors from around the world.

The Percona Live Open Source Database Conference will be April 23-25, 2018 at the Hyatt Regency Santa Clara & The Santa Clara Convention Center.

Feb
06
2018
--

Announcing Experimental Percona Monitoring and Management (PMM) Functionality via Percona Labs

In this blog post, we’ll introduce how you can look at some experimental Percona Monitoring and Management (PMM) features using Percona Labs builds on GitHub.

Note: PerconaLabs and Percona-QA are open source GitHub repositories for unofficial scripts and tools created by Percona staff. The software builds located in these repositories are not officially released software and are not covered by Percona support or services agreements, but these handy utilities can help you save time and effort.

Percona Monitoring and Management (PMM) is a free and open-source platform for managing and monitoring MySQL® and MongoDB® performance. You can run PMM in your environment for maximum security and reliability. It provides thorough time-based analysis for MySQL and MongoDB servers to ensure that your data works as efficiently as possible.

This month we’re announcing access to Percona Labs builds of Percona Monitoring and Management so that you can experiment with new functionality that’s not yet in our mainline product. You can identify the unique builds at:

https://hub.docker.com/r/perconalab/pmm-server/tags/

Most of the entries here are the pre-release candidate images we use for QA, and they follow a format of all integers (for example “201802061627”). You’re fine to use these images, but they aren’t the ones that have the experimental functionality.

Today we have two builds of note (these DO have the experimental functionality):

  • 1.6.0-prom2.1
  • 1.5.3-prometheus2

We’re highlighting Prometheus 2.1 on top of our January 1.6 release (1.6.0-prom2.1), available in Docker format. Some of the reasons you might want to deploy this experimental build to take advantage of the Prometheus 2 benefits are:

  • Reduced CPU usage by Prometheus, meaning you can add more hosts to your PMM Server
  • Performance improvements, meaning dashboards load faster
  • Reduced disk I/O, disk space usage

Please keep in mind that this is a Percona Labs build (see our note above), so note the following two criteria:

  • Support is available from our Percona Monitoring and Management Forums
  • Upgrades might not work – don’t count on upgrading out of this version to a newer release (although it’s not guaranteed to block upgrades)

How to Deploy an Experimental Build from Percona Labs

The great news is that you can follow our Deployment Instructions for Docker, and the only change is where you specify a different Docker container to pull. For example, the standard way to deploy the latest stable PMM Server release with Docker is:

docker pull percona/pmm-server:latest

To use the Percona Labs build 1.6.0-prom2.1 with Prometheus 2.1, execute the following:

docker pull perconalab/pmm-server:1.6.0-prom2.1
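
From there, deployment follows the same two-container pattern as the stable release. A sketch, assuming the experimental image uses the same volume layout as the standard PMM 1.x Docker deployment instructions (the volume paths here are taken from those instructions and may differ for your setup):

```shell
# Create a data container to persist metrics and dashboards across upgrades
docker create \
  -v /opt/prometheus/data \
  -v /opt/consul-data \
  -v /var/lib/mysql \
  -v /var/lib/grafana \
  --name pmm-data \
  perconalab/pmm-server:1.6.0-prom2.1 /bin/true

# Run PMM Server itself, reusing the data container's volumes
docker run -d \
  -p 80:80 \
  --volumes-from pmm-data \
  --name pmm-server \
  --restart always \
  perconalab/pmm-server:1.6.0-prom2.1
```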

Please share your feedback on this build on our Percona Monitoring and Management Forums.

If you’re looking to deploy Percona’s officially released PMM Server (not the Percona Labs release, but our mainline version which currently is release 1.7) into a production environment, I encourage you to consider a Percona Support contract, which includes PMM at no additional charge!

Jul
11
2017
--

Abstract launches as the versioning system of record for design

Sales teams have Salesforce. Engineers have GitHub. But designers have always had slim pickings. Abstract, launching today, is a workflow platform and system of record built for designers to solve the debilitating frustrations of the design process. The company is targeting Sketch users out of the gate, with ambitions to accommodate the whole gamut of visual file types.

May
02
2017
--

Facebook’s fastText library is now optimized for mobile

This morning Facebook’s AI Research (FAIR) lab released an update to fastText, its super-speedy open-source text classification library. When it was initially released, fastText shipped with pre-trained word vectors for 90 languages, but today it’s getting a boost to 294 languages. The release also brings enhancements to reduce model size and ultimately memory demand.

Jan
20
2017
--

How to Manually Build Percona Server for MySQL RPM Packages

RPM PackagesIn this blog, we’ll look at how to manually build Percona Server for MySQL RPM packages.

Several customers and other people from the open source community have asked us how they could make their own Percona Server for MySQL RPM binaries from scratch.

This request is often made by companies that want to add custom patches to our release. To do this, you need to make some modifications to the

percona-server.spec

 file in the source tree, and some preparation is necessary.

This post covers how you can make your own RPMs from the Git repository or a source tarball, so that you can build RPMs from your own modified branch or by applying patches. In this example, we’ll build Percona Server 5.7.16-10.

Making your own RPMs is not a recommended practice, and should rarely be necessary.

Prepare the Source

Using GIT Repository

We can fetch percona/percona-server from GitHub (or your own fork). As we build Percona Server 5.7.16-10, we create a new branch based on the tag of that version:

$ git clone https://github.com/percona/percona-server.git
Cloning into 'percona-server'...
remote: Counting objects: 1216597, done.
remote: Compressing objects: 100% (4/4), done.
remote: Total 1216597 (delta 1), reused 1 (delta 1), pack-reused 1216592
Receiving objects: 100% (1216597/1216597), 510.50 MiB | 5.94 MiB/s, done.
Resolving deltas: 100% (997331/997331), done.
Checking out files: 100% (28925/28925), done.
$ cd percona-server/
$ git checkout -b build-Percona-Server-5.7.16-10 Percona-Server-5.7.16-10
Switched to a new branch 'build-Percona-Server-5.7.16-10'
$ git submodule init
Submodule 'Percona-TokuBackup' (https://github.com/percona/Percona-TokuBackup.git) registered for path 'plugin/tokudb-backup-plugin/Percona-TokuBackup'
Submodule 'storage/rocksdb/rocksdb' (https://github.com/facebook/rocksdb.git) registered for path 'storage/rocksdb/rocksdb'
Submodule 'PerconaFT' (https://github.com/percona/PerconaFT.git) registered for path 'storage/tokudb/PerconaFT'
$ git submodule update
Cloning into 'plugin/tokudb-backup-plugin/Percona-TokuBackup'...
...
Submodule path 'plugin/tokudb-backup-plugin/Percona-TokuBackup': checked out '1b9bb16ad74588601d8fefe46c74cc1dac1dd1d5'
Cloning into 'storage/rocksdb/rocksdb'...
...
Submodule path 'storage/rocksdb/rocksdb': checked out '6a17b07ca856e573fabd6345d70787d4e481f57b'
Cloning into 'storage/tokudb/PerconaFT'...
...
Submodule path 'storage/tokudb/PerconaFT': checked out '0c1b53909bc62a4d9b420116ec8833c78c0c6e8e'

Downloading Source Tarball

An alternative way is to download the source tarball, which you can find at https://www.percona.com/downloads/Percona-Server-5.7/Percona-Server-5.7.16-10/source/.

Extract the source tarball, as the RPM spec file is located there:

$ wget https://www.percona.com/downloads/Percona-Server-5.7/Percona-Server-5.7.16-10/source/tarball/percona-server-5.7.16-10.tar.gz
$ tar -xzvf percona-server-5.7.16-10.tar.gz
$ cd percona-server-5.7.16-10

Making Changes with Patch Files

If you need to make any changes to the source code, you can either use your own GitHub fork or you can apply patches. If you use the former, then you can skip this section.

Why Patches?

Why would we want to use patch files? Because you won’t need to maintain your own fork. You can just build the RPM with the Percona Server source tarball and the patch file.

Create Patch Files

If you do not want to use your own fork in GitHub, you can also create some patch files and modify the RPM spec file to include them.

  1. Create your changes to the source files
  2. Use
    diff

     to create the

    patch

     file:

    $ diff -ru FILE-orig FILE >| ~/custom.patch
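
Before wiring a patch into the spec file, it is worth checking that it round-trips cleanly with patch. A small self-contained sketch (the file names and contents here are illustrative, not from the Percona Server tree):

```shell
# Create an "original" file and a modified copy, then diff them into a patch
printf 'line one\nline two\n' > FILE-orig
printf 'line one\nline two (patched)\n' > FILE
diff -u FILE-orig FILE > custom.patch || true   # diff exits 1 when files differ

# Apply the patch to a pristine copy and confirm it matches the modified file
cp FILE-orig FILE-restored
patch FILE-restored < custom.patch
cmp -s FILE FILE-restored && echo "patch applies cleanly"
```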

Add Patch to RPM Spec File

In order for the patch to be applied during the building of the RPM, edit the 

./build-ps/percona-server.spec

 file and add two lines for the custom patch (the Patch1 declaration and the %patch1 application shown below):

...
Source91:       filter-requires.sh
Patch0:         mysql-5.7-sharedlib-rename.patch
Patch1:         custom.patch
BuildRequires:  cmake >= 2.8.2
...
%prep
%setup -q -T -a 0 -a 10 -c -n %{src_dir}
pushd %{src_dir}
%patch0 -p1
%patch1 -p1
%build
...

Note that you have to number the patches; in this case I gave it the name

patch1

.

Creating New Source Tarball

If you use your own GitHub fork, or you made manual changes to the source (and you’re not using patch files), you should use that to create your own source tarball.

First, change the Percona Server version number. In this case, we are naming it 

10custom

 to indicate this is not a standard build. You can adapt as you wish, just make sure the 

VERSION

 file looks something like this:

$ cat VERSION
MYSQL_VERSION_MAJOR=5
MYSQL_VERSION_MINOR=7
MYSQL_VERSION_PATCH=16
MYSQL_VERSION_EXTRA=-10custom

Then make the source tarball:

$ cmake . -DDOWNLOAD_BOOST=1 -DWITH_BOOST=build-ps/boost
$ make dist
...
-- Source package ~/percona-server/percona-server-5.7.16-10custom.tar.gz created
Built target dist

Now you have the tarball in your source directory, but we won’t use it yet. We need to add some TokuDB submodules to it first. The make dist also kept the uncompressed directory, which we will use to create the tarball again when the TokuDB parts are included:

$ rm percona-server-5.7.16-10custom.tar.gz
$ cp -R storage/tokudb/PerconaFT/* \
    percona-server-5.7.16-10custom/storage/tokudb/PerconaFT/
$ cp -R plugin/tokudb-backup-plugin/Percona-TokuBackup/* \
    percona-server-5.7.16-10custom/plugin/tokudb-backup-plugin/Percona-TokuBackup/
$ tar --owner=0 --group=0 --exclude=.git \
   -czf percona-server-5.7.16-10custom.tar.gz \
   percona-server-5.7.16-10custom

Preparing Build Environment

Environment Requirements

Make sure the build host has at least 10GB of free space and at least 4GB of RAM, or the build will fail at some point.

Install Dependencies

To build the RPM, we need to prepare our build environment and ensure the necessary build dependencies are installed:

$ sudo yum install epel-release
$ sudo yum install git gcc gcc-c++ openssl check cmake \
                   bison boost-devel asio-devel libaio-devel \
                   ncurses-devel readline-devel pam-devel \
                   wget perl-Env time numactl-devel rpmdevtools \
                   rpm-build

Prepare RPM Build Tree

Next we need to prepare our build directory structure, in this case we will install it in

~/rpmbuild

:

$ cd ~/
$ rpmdev-setuptree

Download Boost Source

We also need to download the boost source (http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.gz) and put it in the 

~/rpmbuild/SOURCES/

directory:

$ cd ~/rpmbuild/SOURCES
$ wget http://sourceforge.net/projects/boost/files/boost/1.59.0/boost_1_59_0.tar.gz

Move Files to the RPM Build Tree

Copy the source tarball:

$ cp percona-server-5.7.16-10custom.tar.gz ~/rpmbuild/SOURCES/

Also copy the 

./build-ps/rpm/*

 files:

$ cp ~/percona-server/build-ps/rpm/* ~/rpmbuild/SOURCES

If you have any patch files, move them to the

~/rpmbuild/SOURCES/

 directory as well:

$ cp ~/custom.patch ~/rpmbuild/SOURCES/

Then the

percona-server.spec

 file goes into the

~/rpmbuild/SPECS

 directory:

$ cp ~/percona-server/build-ps/percona-server.spec ~/rpmbuild/SPECS/

Setting Correct Versions in the Spec File

$ cd ~/rpmbuild/SPECS/
$ sed -i s:@@MYSQL_VERSION@@:5.7.16:g percona-server.spec
$ sed -i s:@@PERCONA_VERSION@@:10custom:g percona-server.spec
$ sed -i s:@@REVISION@@:f31222d:g percona-server.spec
$ sed -i s:@@RPM_RELEASE@@:1:g percona-server.spec
$ sed -i s:@@BOOST_PACKAGE_NAME@@:boost_1_59_0:g percona-server.spec

Note the

@@PERCONA_VERSION@@

 contains our

10custom

 version number.

You can add your changelog information in the

%changelog

 section.

Building RPM Packages

For Percona Server, making the RPMs is a two-step process. First, we need to make the SRPMS:

$ cd ~/
$ rpmbuild -bs --define 'dist .generic' rpmbuild/SPECS/percona-server.spec
Wrote: /home/vagrant/rpmbuild/SRPMS/Percona-Server-57-5.7.16-10custom.1.generic.src.rpm

And then we can build the binary RPMs from the SRPMS:

$ rpmbuild --define 'dist .el7' --rebuild \
       rpmbuild/SRPMS/Percona-Server-57-5.7.16-10custom.1.generic.src.rpm
...
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-server-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-client-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-test-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-devel-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-shared-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-shared-compat-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-tokudb-57-5.7.16-10custom.1.el7.x86_64.rpm
Wrote: /home/vagrant/rpmbuild/RPMS/x86_64/Percona-Server-57-debuginfo-5.7.16-10custom.1.el7.x86_64.rpm
Executing(%clean): /bin/sh -e /var/tmp/rpm-tmp.xmcI8e
+ umask 022
+ cd /home/vagrant/rpmbuild/BUILD
+ cd percona-server-5.7.16-10custom
+ /usr/bin/rm -rf /home/vagrant/rpmbuild/BUILDROOT/Percona-Server-57-5.7.16-10custom.1.generic.x86_64
+ exit 0
Executing(--clean): /bin/sh -e /var/tmp/rpm-tmp.UXUTmh
+ umask 022
+ cd /home/vagrant/rpmbuild/BUILD
+ rm -rf percona-server-5.7.16-10custom
+ exit 0

Once the build is done, you can find the RPMs in the RPMS directory:

$ ls -1 ~/rpmbuild/RPMS/x86_64/
Percona-Server-57-debuginfo-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-client-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-devel-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-server-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-shared-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-shared-compat-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-test-57-5.7.16-10custom.1.el7.x86_64.rpm
Percona-Server-tokudb-57-5.7.16-10custom.1.el7.x86_64.rpm

Oct
07
2016
--

GitHub is raising a secondary round

We’re hearing from several sources that a secondary financing round is in the works for GitHub, following its last $250 million financing round that valued it at $2 billion in July last year. However, there’s a little bit of interesting chatter beyond that: they’re raising a secondary for potential liquidation of investors or employees, we hear. There are two parts to the…

Feb
09
2016
--

GitHub Updates Its Enterprise Product With Clustering Support, Updated Design

GitHub Enterprise, the company’s on-premises solution for managing code, is getting a major update today. It comes at a time when there seems to be some upheaval in the company around the importance management has been putting on this product. The marquee feature of GitHub Enterprise 2.5 is support for clustering. With this, businesses can now set up a cluster of GitHub Enterprise…
