New GitHub Username Shirts in the Shop

Our newest shirt comes in two colors and makes it socially acceptable to write on your clothing with your GitHub username or project name.

Username Shirts

Available in the GitHub Shop.

Git 2.3 has been released

The Git developers have just released a major new version of the Git command-line utility, Git 2.3.0.

As usual, this release contains many improvements, performance enhancements, and bug fixes. Full details about what's included can be found in the Git 2.3.0 release notes, but here's a look at what we consider to be the coolest new features in this release.

Push to deploy

One way to deploy a Git-based web project is to keep a checked-out working copy on your server. When a new version is ready, you log into the server and run git pull to fetch and deploy the new changes. While this technique has some disadvantages (see below), it is very easy to set up and use, especially if your project consists mostly of static content.

With Git 2.3, this technique has become even more convenient. Now you can push changes directly to the repository on your server. Provided no local modifications have been made on the server, any changes to the server's current branch will be checked out automatically. Instant deploy!

To use this feature, you have to first enable it in the Git repository on your server by running

$ git config receive.denyCurrentBranch updateInstead
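
With that configured, deploying becomes a single push from your development machine. A minimal sketch, assuming the server's repository lives at /var/www/site and is reachable over SSH (the remote name, user, host, and path are all illustrative):

$ git remote add production deploy@example.com:/var/www/site
$ git push production master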

When shouldn't you use push-to-deploy?

Deploying by pushing to a Git repository is quick and convenient, but it is not for everybody. For example:

  • Your server will contain a .git directory containing the entire history of your project. You probably want to make extra sure that it cannot be served to users!
  • During deploys, users may momentarily see the site in an inconsistent state, with some files at the old version and others at the new, or even half-written files. If this is a problem for your project, push-to-deploy is probably not for you.
  • If your project needs a "build" step, you will have to set that up explicitly, perhaps via githooks (see the sketch after this list).
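
For example, one way to wire up a build step (a sketch only; "make build" stands in for whatever your project actually needs) is a post-receive hook in the server's repository, which runs after each accepted push:

#!/bin/sh
# .git/hooks/post-receive on the server; must be executable.
# In a non-bare repository, hooks run from the root of the working tree,
# so the freshly checked-out files are already in place here.
make build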

See how this feature was implemented

Faster cloning by borrowing objects from existing clones

Cloning a remote repository can involve transferring a lot of data over the network. But if you already have another local clone of the same repository, it probably already has most of the history that the new clone will need. Now it is easy to use those local objects rather than transferring them again:

$ git clone --reference ../oldclone --dissociate https://github.com/gitster/git.git

The new --dissociate option tells Git to copy any objects it can from local repository ../oldclone, retrieving the remainder from the remote repository. Afterwards, the two clones remain independent; either one can be deleted without impacting the other (unlike when --reference is used without --dissociate).

See how this feature was implemented

More conservative default behavior for git push

If you run git push without arguments, Git now uses the more conservative simple behavior as the default. This means that Git refuses to push anything unless you have defined an "upstream" branch for your current branch and the upstream branch has the same name as your current branch. For example:

$ git config branch.autosetupmerge true
$ git checkout -b experimental origin/master
Branch experimental set up to track remote branch master from origin.
Switched to a new branch 'experimental'
$ git commit -a -m 'Experimental changes'
[experimental 43ca356] Experimental changes
$ git push
fatal: The upstream branch of your current branch does not match
the name of your current branch.  To push to the upstream branch
on the remote, use

    git push origin HEAD:master

To push to the branch of the same name on the remote, use

    git push origin experimental

$

The new default behavior is meant to help users avoid pushing changes to the wrong branch by accident. In the case above, the experimental branch started out tracking master, but the user probably wanted to push the experimental branch to a new remote branch called experimental. So the correct command would be git push origin experimental.
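
If pushing a new experimental branch really was the intent, adding -u records origin/experimental as the upstream at the same time, so later plain git push invocations will succeed without complaint:

$ git push -u origin experimental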

The default behavior can be changed by configuring push.default. If you want to go back to the version 1.x behavior, set it to matching:

$ git config --global push.default matching

See how this feature was implemented

More flexible ssh invocation

Git knows how to connect to a remote host via the SSH protocol, but sometimes you need to tweak exactly how it makes the connection. You can now use a new environment variable, GIT_SSH_COMMAND, to specify the command (including arguments), or even an arbitrary snippet of shell code, that Git should use to connect to the remote host. For example, if you need to use a different SSH identity file when connecting to a Git server, you could enter

$ GIT_SSH_COMMAND='ssh -i git_id' git clone host:repo.git
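
The variable can also be exported once for a whole session or script rather than set per command; the extra IdentitiesOnly option shown here is just an illustration of the kind of tweak you might add:

$ export GIT_SSH_COMMAND='ssh -i git_id -o IdentitiesOnly=yes'
$ git fetch
$ git push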

See how this feature was implemented

The credential subsystem is now friendlier to scripting

When Git needs a password (e.g., to connect to a remote repository over http), it uses the credential subsystem to query any helpers (like the OS X Keychain helper), and then finally prompts the user on the terminal. When Git is run from an automated process like a cron job, there is usually no terminal available and Git will skip the prompt. However, if there is a terminal available, Git may hang forever, waiting for the user to type something. Scripts which do not expect user input can now set GIT_TERMINAL_PROMPT=0 in the environment to avoid this behavior.
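
For example, a nightly mirror job might guard against hanging like this (the remote name is illustrative; with the variable set, Git fails immediately instead of waiting for input):

$ GIT_TERMINAL_PROMPT=0 git fetch origin || echo "fetch failed: credentials required"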

See how this feature was implemented

Other

Some other useful tidbits:

  • Now Git is cleverer about not rewriting paths in the working tree unnecessarily when checking out particular commits. This will help reduce the amount of redundant work done during software builds and reduce the time that incomplete files are present on the filesystem (especially helpful if you are using push-to-deploy). See how this feature was implemented
  • Now git branch -d supports a --force/-f option, which can be used to delete a branch even if it hasn't been merged yet. Similarly, git branch -m supports --force/-f, which allows a branch to be renamed even if the new name is already in use. This change makes these commands more consistent with the many other Git commands that support --force/-f; a quick example follows this list. See how these features were implemented
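
For example (branch names are illustrative):

$ git branch -d -f unfinished-topic        # delete even though it hasn't been merged
$ git branch -m -f my-branch existing-name # rename even if existing-name is already taken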

Additional resources

Don't forget: an important Git security vulnerability was fixed last December. If you haven't upgraded your Git client since then, we recommend that you do so as soon as possible. The new release, 2.3.0, includes the security fix, as do the maintenance releases 1.8.5.6, 1.9.5, 2.0.5, and 2.1.4, which were released in December.

Keeping GitHub OAuth Tokens Safe

While making your source code available in a public GitHub repository is awesome, it's important to be sure you don't accidentally commit your passwords, secrets, or anything else that other people shouldn't know.

Starting today you can commit more confidently, knowing that we will email you if you push one of your OAuth Access Tokens to any public repository with a git push command. As an extra bonus, we'll also revoke your token so it can't be used to perform any unauthorized actions on your behalf.

For more tips on keeping your account secure, see "Keeping your SSH keys and application access tokens safe" in GitHub Help.

Get ready for GitHub Universe, October 1-2 in San Francisco

GitHub Universe

GitHub is planning a conference like we've never planned before. Get ready for GitHub Universe – part festival, part conference, all for anyone who cares about making great software. From independent developers to large teams, open source to commercial apps and services: we're bringing together every part of the community to discuss how to design, build, and ship software.

Join us and over a thousand GitHub fans for two days of amazing community, industry-leading speakers, in-depth training, immersive activities, and the latest GitHub announcements.

Mark your calendar!

  • When: October 1-2, 2015
  • Where: Pier 70, San Francisco, CA

Stay in the know!

Between now and October, we'll be rolling out updates here on the GitHub blog and over on the GitHub Universe conference website. You can also sign up to get updates about the conference, including notifications when tickets go on sale and ongoing news about speakers and activities.

:rocket:

GitHub Security Bug Bounty program turns one

It's already been a year since we launched the GitHub Security Bug Bounty, and, thanks to bug reports from researchers across the globe, 73 previously unknown security vulnerabilities in our applications have been identified and fixed.

Bugs squashed

Of 1,920 submissions in the past year, 869 warranted further review, helping us to identify and fix vulnerabilities fitting nine of the OWASP top 10 vulnerability classifications. 33 unique researchers earned a cumulative $50,100 for the 57 medium to high risk vulnerabilities they reported.

Bounty submissions per week

We also saw some incredibly involved and creative vulnerabilities reported.

Our top submitter, @adob, reported a persistent DOM-based cross-site scripting vulnerability, relying on a previously unknown Chrome browser bug that allowed our Content Security Policy to be bypassed.

Our second most prolific submitter, @joernchen, reported a complex vulnerability in the communication between two of our backend services that could allow an attacker to set arbitrary environment variables. He followed that up by finding a way to achieve arbitrary remote command execution by setting the right environment variables.

New year, higher payouts

To kick off our Bug Bounty Program's second year, we're doubling the maximum bounty payout, from $5,000 to $10,000. If you've found a vulnerability that you'd like to submit to the GitHub security team for review, send us the details, including the steps required to reproduce the bug. You can also follow @GitHubSecurity for ongoing updates about the program.

Thanks to everyone who made the first year of our Bug Bounty a success. Happy hunting in 2015!

Git Merge returns April 8-9th in Paris

Git will be 10 years old in April, and we're bringing back Git Merge to celebrate. Mark your calendars for April 8-9th to be a part of the only Git user conference of its kind.

Hosted at La Gaîté lyrique in Paris' 3rd arrondissement, Git Merge will feature sessions on using Git, scaling Git, and developing on Git, delivered by core Git maintainers.

La Gaîté lyrique

Tickets, session details, and hotel information will be available soon. Follow @github on Twitter for updates, or add your email to the list at git-merge.com and we'll let you know as soon as tickets are on sale.

Et voilà!

How to write the perfect pull request

As a company grows, people and projects change. To continue to nurture the culture we want at GitHub, we've found it useful to remind ourselves what we aim for when we communicate. We recently introduced these guidelines to help us be our best selves when we collaborate on pull requests.

Approach to writing a Pull Request

  • Include the purpose of this Pull Request. For example:
    This is a spike to explore…
    This simplifies the display of…
    This fixes handling of…
  • Consider providing an overview of why the work is taking place (with any relevant links); don’t assume familiarity with the history.
  • Remember that anyone in the company could be reading this Pull Request, so the content and tone may inform people other than those taking part, now or later.
  • Be explicit about what feedback you want, if any: a quick pair of :eyes: on the code, discussion on the technical approach, critique on design, a review of copy.
  • Be explicit about when you want feedback. If the Pull Request is a work in progress, say so. A prefix of “[WIP]” in the title is a simple, common pattern to indicate that state.
  • @mention individuals that you specifically want to involve in the discussion, and mention why. (“/cc @jesseplusplus for clarification on this logic”)
  • @mention teams that you want to involve in the discussion, and mention why. (“/cc @github/security, any concerns with this approach?”)

Offering feedback

  • Familiarize yourself with the context of the issue, and reasons why this Pull Request exists.
  • If you disagree strongly, consider giving it a few minutes before responding; think before you react.
  • Ask, don’t tell. (“What do you think about trying…?” rather than “Don’t do…”)
  • Explain your reasons why code should be changed. (Not in line with the style guide? A personal preference?)
  • Offer ways to simplify or improve code.
  • Avoid using derogatory terms, like “stupid”, when referring to the work someone has produced.
  • Be humble. (“I’m not sure, let’s try…”)
  • Avoid hyperbole. (“NEVER do…”)
  • Aim to develop professional skills, group knowledge and product quality, through group critique.
  • Be aware of negative bias with online communication. (If content is neutral, we assume the tone is negative.) Can you use positive language as opposed to neutral?
  • Use emoji to clarify tone. Compare “:sparkles: :sparkles: Looks good :+1: :sparkles: :sparkles:” to “Looks good.”

Responding to feedback

  • Consider leading with an expression of appreciation, especially when feedback has been mixed.
  • Ask for clarification. ("I don’t understand, can you clarify?")
  • Offer clarification; explain the decisions you made to reach the solution in question.
  • Try to respond to every comment.
  • Link to any follow up commits or Pull Requests. (“Good call! Done in 1682851”)
  • If there is growing confusion or debate, ask yourself if the written word is still the best form of communication. Talk (virtually) face-to-face, then mutually consider posting a follow-up to summarize any offline discussion (useful for others who may be following along, now or later).

These guidelines were inspired partly by Thoughtbot's code review guide.

Our guidelines suit the way we work, and the culture we want to nurture. We hope you find them useful too.

Happy communicating!

Announcing GitHub Enterprise 2.1.0

hero-2-1-release

It's a new year and we couldn't think of a better way to start it off than with a new release of GitHub Enterprise. We've included a number of highly-requested features, along with some of the best stuff recently shipped on GitHub.com - all to give developers and admins the best tools to build and ship software at work.

Let's talk about some of the features you'll find in this release.

Automate user and team management with LDAP Sync

Many of you have told us that you want it to be easier to use GitHub Enterprise with LDAP, especially for organizations managing lots of users. With this release, GitHub Enterprise integrates with your LDAP directory more deeply than ever before, automating identity and access management for your organization. This means you can provision and deprovision user accounts in GitHub Enterprise directly from LDAP with user sync, and automatically grant users access to repositories with team sync. While we were at it, we also improved LDAP performance across the board, increasing reliability and throughput.

Deploy GitHub Enterprise on OpenStack KVM

One of our goals with last year's rebuild of GitHub Enterprise was to make it available in more of the environments where you want to run it, whether you're managing your infrastructure on servers you own or on an internal cloud-based platform. That's why we're excited to announce that with this release, GitHub Enterprise is available on OpenStack KVM, in addition to Amazon Web Services and VMware. If your tech stack is built on KVM, you can now easily set up GitHub Enterprise and integrate with other parts of your internal system.

Audit all user actions across your instance

The Organization Audit Log that shipped with the November release of GitHub Enterprise has now been expanded to the instance level, giving administrators a skimmable and searchable record of every action performed across GitHub Enterprise in the past 90 days. Events like repository creation, team deletion, the addition of webhooks, and more are surfaced in a running log, along with information about who performed the action and when it occurred. These events can be filtered for deeper analysis, and you can create a wide range of custom search queries to make sure you're always aware of what's taking place on your instance.

audit-log

Monitor the performance of GitHub Enterprise

If you're administering GitHub Enterprise, you should be able to identify whether your instance is performing correctly and quickly locate what's wrong when it isn't. With the new Instance Monitoring Dashboard, you now can. With data displayed for things like data disk usage, memory, CPUs, and more, you'll be able to answer questions like:

  • Are my users experiencing errors?
  • Are things fast or slow for my users?
  • What is a typical traffic pattern? What is abnormal?
  • Should I upgrade CPU, memory, or IO to improve the performance of my instance?
  • When should I plan to increase my disk space given my current growth rate?

monitoring-dashboard

Even more betterness

GitHub Enterprise 2.1.0 also includes a number of other improvements and bug fixes. To see the full list, check out the release notes for GitHub Enterprise 2.1.0.

Take 2.1.0 for a spin

If you're an existing GitHub Enterprise customer, you can download the latest release from the GitHub Enterprise website. If you want to give GitHub Enterprise a try, start a 45-day free trial on OpenStack KVM, AWS, or VMware.

Organization-approved applications

Applications integrate with GitHub to help you and your team build, test, and deploy software. But not all apps are created equal. By adopting a list of approved applications, organization admins can better manage which apps can be given access to their organization's data.

Approve trusted applications

If you're administering an organization on GitHub.com, you can set up a whitelist of trusted third-party applications.

organization-approved-applications

With this protection in place, all applications need your explicit approval before they can access your organization's resources. You can grant access to your favorite continuous integration service (for example), while ignoring other applications that you may not trust or need.

Request your favorite tools

If you're a member of an organization and have a third-party application that you want to use, simply ask your organization's admins to approve access. They can then review the requested application to decide whether it should have access to your organization's data.

request-organization-approval-for-an-app

For more information on setting up a list of approved applications for your organization, be sure to check out the docs.

If you develop an app that integrates with GitHub, check out the Developer Blog for our latest recommendations on working with organizations and their data.

Create Pull Requests with GitHub for Mac

Pull requests are fantastic. We use them every day to review and discuss code, documentation, and designs. Now you can create pull requests without leaving the warm embrace of GitHub for Mac.

Create pull requests

We've also made forks easier to work with. Forked repositories now automatically fetch their upstream repository, and its branches can be checked out or merged. No more futzing with the command line or multiple remotes!

Check out the upstream's branch

Download GitHub for Mac and start sending pull requests!

Quick Pull Requests

Starting conversations around changes is what pull requests and GitHub Flow are all about, so we’re excited to introduce a powerful shortcut that gets you there even faster.

When using your browser to edit a file on GitHub.com, the web-based commit composer lets you quickly propose a change to a new branch and then immediately open a pull request for discussion and review:

Selecting the new branch option to open a quick pull request

Reducing the time it takes to open a pull request lowers the contribution barrier, and having this workflow available entirely within the browser makes collaboration more approachable for people with all technical skill levels.

To learn how GitHub Flow works, and whether it might be a good workflow to use on your projects, check out our guide on Understanding GitHub Flow.

Partial commits in GitHub for Windows

Ever found yourself in a situation where your working directory contains a mix of changes that don't quite fit together? It would be easy to commit it all at once and move on; however, small, focused commits are great for making it easy to review and discuss a branch of work - especially when working on a complex codebase.

But how can you choose which changes to use in a commit?

The newest release of GitHub for Windows supports selecting lines or blocks of changes when creating a commit. Simply click the desired lines in the gutter, create the commit, and leave the other changes in your working directory to continue working on.

Create a partial commit

For people familiar with the command line, this change is similar to interactive staging using git add -i or git add -p.
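
On the command line, the equivalent flow looks roughly like this (the file name is illustrative; git add -p walks through each hunk and asks whether to stage it):

$ git add -p app.js
$ git commit -m "Commit only the focused change"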

How GitHub uses GitHub to document GitHub

Providing well-written documentation helps people understand, make use of, and contribute back to your project, but it's only half of the documentation equation. The underlying system used to serve documentation can make life easier for the people writing it—whether that's just you or the team you work with.

The hardest part about documentation should be deciding which words to use, not configuring tools or figuring out how to deploy updates. Members of the GitHub Documentation Team come from backgrounds where crude XML-based authoring tools and complicated CMSs are the norm. We didn't want to use those tools here so we've spent a good deal of time configuring our own documentation workflow and set-up.

We've talked before about how we use GitHub to build GitHub; here's a look at how we use GitHub Pages to serve our GitHub Help documentation to millions of readers each month.

Our previous setup

A few months ago, we migrated our Help site from a custom-built Rails app to a static Jekyll site hosted on GitHub Pages. Our previous Help site consisted of two separate repositories:

  • A Rails application, which was responsible for managing the site, the assets, and the search implementation.
  • The actual content, which was just a grouping of Markdown files.

Our Rails app was hosted on a third-party service; as updates were made to the code, we deployed them with Hubot and Chatops, as we do with the rest of GitHub.

Our typical writing workflow looked like this:

  • The Documentation Team took note when a new feature was shipping.
  • We'd create a new issue to track the feature.
  • When we were ready, we'd open a pull request to start iterating on the content.
  • When the content was in a good place, we'd @mention the team (@github/docs) and have a peer editor review our words.
  • When the feature was ready to ship, we'd merge the pull request. A webhook would fire from the content repository to our hosted Rails app; the webhook's payload updated a database row containing the article's raw Markdown.

Here's an example conversation from @neveret and @bernars showing a bit of our normal editing workflow:

Sample conversation

Working with pull requests was fantastic, because it directly matched the GitHub flow we use across the company. And we liked writing in Markdown, because its syntax enabled us to effectively describe new features in no time.

However, our Rails implementation was a fairly complicated setup:

  • Our reliance on an external host required dedicated employees on our Engineering, Ops, and Security teams to monitor the site and respond to incidents as they arose.
  • Our Documentation team couldn't easily view local changes to the content. Even though we wrote in Markdown, we'd still need to set up a local instance of the Rails app and run a script to import the content into a database, just to see how it would look on the site.
  • We were constantly tweaking the Rails server, but noticed that each request a reader made to the site was still slow. The HTML was being generated on-the-fly, requiring calls to the database and constantly iterating on stronger caching strategies.

We knew we could do much better.

Our new setup

When Jekyll 2.0 was released, we saw an opportunity to replace our existing setup with a static site. The new Collections document type lets you define a file structure that matches your needs. In addition, Jekyll 2.0 introduced support for Sass and CoffeeScript assets, which simplifies writing front-end code.

Open source is great because it's, well, open. As we migrated to Jekyll, we made several pull requests to components of Jekyll, making it a better tool for users of GitHub Pages.

Very little of our original workflow has changed. We still write in Markdown and we still open pull requests for an editorial review. When the pull request is merged, the GitHub Pages site is automatically built and deployed within seconds.

Here's a quick rundown on how we're using core Jekyll features and a handful of plugins to implement the help site.

Gems we use

We intentionally rely on core Jekyll code as much as possible, to minimize our reliance on maintaining custom plugins.

Jekyll 2.0 introduced a new plugin type called a Converter that transforms any markup into HTML. This frees the writer up to compose content however she chooses, and Jekyll will just serve the final HTML. For example, you can write your posts in AsciiDoc, if that's your thing.

To that end, we wrote jekyll-html-pipeline, an implementation of our own open-source html-pipeline. This ensures that the content on our Help site looks the same as content everywhere on GitHub. We also wrote our own Markdown filter to provide some syntax extensions that make writing documentation much easier.

Search

With the previous Rails site, we were using an ElasticSearch provider that indexed our database and implemented a search system for our Help site.

Now, we use lunr-js to provide a faster client-side search experience. In sifting through our analytics, we found that the vast majority of our users relied on an external search provider to get to our documentation. It didn't make sense, during or after the migration, to expend much energy on a server-side search solution.

Content references

The Docs team really wanted to use "content references," or conrefs, when writing documentation. A conref allows you to write a chunk of text once and reuse it throughout the site. (The idea was borrowed from the DITA standard.)

The old Rails app wouldn't permit us to write reusable content, but now we can with the power of Jekyll's data files. For example, we've defined a file called conrefs.yml, and have a set of key-value strings that look something like this:

repositories:
  create_new: |
    1. In the upper-right corner of any page, click {{ octicon-plus Plus symbol }}, and then click **New repository**.
      ![New repository menu](/assets/images/help/repository/repo-create.png)

Our keys are grouped by specificity (repositories.create_new); the values they contain are just plain Markdown ("In the upper-right corner..."). We can now reuse this single step across several pages of content that refer to creating a new repository by writing the appropriate Liquid syntax:

To start the process:

{{ site.data.conrefs.repositories.create_new }}
2. Do something else.
3. You're done!

As GitHub's UI evolves, we might need to change the image or rewrite the directional pointer. With a conref, we only have to make the change in one location, rather than a dozen.

Versioned documentation

Another goal of the move was to be able to provide versioned Help documentation. With the release of Enterprise 2.0.0, we began to provide different content sets for the previous 11.10.340 and the current 2.0 releases. In order to do that, we build the Jekyll site with a special audience flag, and check in the generated HTML as part of our Pages repository.

For example, in our config.yml file, we set a key called audience to 11.10.340. If a feature exists that's available in Enterprise 2.0 but not 11.10.340, we demarcate the section using Liquid tags like this:

{% if site.audience != '11.10.340' %}

This new feature...

{% endif %}

Again, this is just taking advantage of core features in Jekyll; we didn't need to build or maintain any aspect of this.
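
As a sketch of how such a per-audience build can be driven (the exact invocation isn't described in this post, and the second config file name and output path here are purely illustrative), Jekyll lets several configuration files be layered, with later ones overriding earlier ones:

$ jekyll build --config _config.yml,_config-11.10.340.yml --destination _site/enterprise/11.10.340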

Testing our site

Just because the site is static doesn't mean that we should avoid test-driven development.

Our first line of defense for testing content has always been html-proofer. This tool helps verify that none of our links and images are broken by quickly validating every URL in our built site.
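
Running it is essentially a one-liner against Jekyll's generated output (the _site path is Jekyll's default destination directory; the executable name has varied slightly across versions of the gem):

$ gem install html-proofer
$ htmlproofer ./_site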

Rubyists are familiar with using Capybara to simulate website interactions in their tests. Would it be crazy to implement a similar idea with our static site? Nope! Our own @bkeepers wrote a blog post four years ago talking about this very problem. With that, we were able to write stronger tests that covered our content and our site behavior. For example, we check that a referenced conref is valid (by looking up the key in the YAML file) or that our JavaScript is functioning properly.

Our Help documentation runs with CI to ensure that nothing broken ever gets in front of our readers:

Our help-docs CI build

Speed

As mentioned above, our new Pages implementation is significantly faster than the old Rails app. This is partly because the site is a bunch of static HTML files—nothing is fetched from a database. More significantly, we've already spent a lot of time configuring our Pages servers to be blazing fast for everyone. The same advantages we have, like serving assets off of a CDN, are also available to every GitHub user.

Help docs GA site load times

Making GitHub Pages work for you

Documentation teams across GitHub can take advantage of the GitHub Flow, Jekyll 2.0, and GitHub Pages to produce high-quality documentation. The benefits that GitHub Pages provides to our Documentation team are already available to any user running a GitHub Pages site.

With our move to Pages, we didn't need to build any new components. We spent far less time building anything and more time discussing a workflow that made sense for our team and company. By committing to using the same hosting features we provide to every GitHub user, we were able to provide better documentation, faster. Our internal workflow has made us more productive, and enabled us to provide features we never could before, such as versioned content.

If you have any questions on our setup, past or present, we're happy to help!

Improving GitHub's SSL setup

To keep GitHub as secure as possible for every user, we will remove RC4 support from our SSL configuration on github.com and in the GitHub API on January 5, 2015.

RC4 has a number of cryptographic weaknesses that may be exploited, impacting the security of your data. More details about these vulnerabilities are listed in the current IETF draft.
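
If you'd like to verify how a server behaves once RC4 is gone, OpenSSL's test client gives a quick check (a sketch; the handshake should fail once the ciphers are disabled):

$ openssl s_client -connect github.com:443 -cipher RC4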

If you are using Internet Explorer on Windows XP, you will no longer be able to access github.com once this change takes place. Windows XP only supports outdated SSL ciphers, is no longer supported by Microsoft, and contains a known critical security problem in its SSL implementation.

We strongly recommend that Windows XP users upgrade to a newer version of Windows. If this is not possible, you will need to use Chrome or Firefox to access GitHub on Windows XP. The git client available at git-scm.com still works on Windows XP.

Vulnerability announced: update your Git clients

A critical Git security vulnerability has been announced today, affecting all versions of the official Git client and all related software that interacts with Git repositories, including GitHub for Windows and GitHub for Mac. Because this is a client-side only vulnerability, github.com and GitHub Enterprise are not directly affected.

The vulnerability concerns Git and Git-compatible clients that access Git repositories on a case-insensitive or case-normalizing filesystem. An attacker can craft a malicious Git tree that causes Git to overwrite its own .git/config file when cloning or checking out a repository, leading to arbitrary command execution on the client machine. Git clients running on OS X (HFS+) or any version of Microsoft Windows (NTFS, FAT) are exploitable through this vulnerability. Linux clients are not affected if they run on a case-sensitive filesystem.

We strongly encourage all users of GitHub and GitHub Enterprise to update their Git clients as soon as possible, and to be particularly careful when cloning or accessing Git repositories hosted on unsafe or untrusted hosts.
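
To see which version of Git you are currently running, so you can compare it against the fixed releases listed below:

$ git --version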

Repositories hosted on github.com cannot contain any of the malicious trees that trigger the vulnerability because we now verify and block these trees on push. We have also completed an automated scan of all existing content on github.com to look for malicious content that might have been pushed to our site before this vulnerability was discovered. This work is an extension of the data-quality checks we have always performed on repositories pushed to our servers to protect our users against malformed or malicious Git data.

Updated versions of GitHub for Windows and GitHub for Mac are available for immediate download; both contain the security fix in the Desktop application itself and in the bundled version of the Git command-line client.

In addition, the following updated versions of Git address this vulnerability:

  • The Git core team has announced maintenance releases for all current versions of Git (v1.8.5.6, v1.9.5, v2.0.5, v2.1.4, and v2.2.1).

  • Git for Windows (also known as MSysGit) has released maintenance version 1.9.5.

  • The two major Git libraries, libgit2 and JGit, have released maintenance versions with the fix. Third party software using these libraries is strongly encouraged to update.

More details on the vulnerability can be found in the official Git mailing list announcement and on the git-blame blog.