That could work. I didn't bother with any kind of optimisation because I assumed git itself would be doing something similar when doing the fetch.
Thinking about what you said here, I think we could simplify things by taking advantage of what a git fetch does. Instead of manually comparing local and remote refs, we could do a fetch and use the fetched refs as the definitive state. This would mean we would only ever hit the network once.
It would look something like this:
- Copy all remote `pull` and `heads` refs to our own local cache (I'm using `refs/git-pr` as the base so people would know which tool is creating the refs):

  `git fetch {remoteName} +refs/pull/*/merge:refs/git-pr/{remoteName}/pull/*/merge +refs/heads/*:refs/git-pr/{remoteName}/heads/* --prune`

- Populate our ref dictionary using all of the refs under `refs/git-pr/{remoteName}`.
- Fix up our `pull` refs using the commit at `HEAD^1` trick (see the sketch after this list).
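
Just to make the flow concrete, here's a rough command-line sketch of those three steps. It assumes `origin` as the remote name and PR `123` as a made-up example, and uses `for-each-ref`/`rev-parse` purely for illustration (the tool itself would presumably do the equivalent through whatever git API it already calls):

```sh
# 1. Mirror the remote's pull/merge and heads refs into our local cache,
#    hitting the network exactly once. --prune drops cached refs that no
#    longer exist on the remote.
git fetch origin \
  "+refs/pull/*/merge:refs/git-pr/origin/pull/*/merge" \
  "+refs/heads/*:refs/git-pr/origin/heads/*" \
  --prune

# 2. Populate the ref dictionary from everything under refs/git-pr/origin.
git for-each-ref --format='%(refname) %(objectname)' refs/git-pr/origin

# 3. Fix up a pull ref via the HEAD^1 trick: resolve the first parent of the
#    cached merge ref (PR 123 is just an example number).
git rev-parse refs/git-pr/origin/pull/123/merge^1
```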
Further down the line we could give the user the option to skip the fetch completely and simply use the cached refs (this would be useful on large/active repositories).
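
If we go that route, the skip-the-fetch path could be as simple as reading whatever is already sitting under the cache namespace. A minimal sketch, again assuming `origin` as the remote name:

```sh
# Offline mode: build the ref dictionary from the cached refs only, without
# touching the network. Only take this path if a cache actually exists.
if [ -n "$(git for-each-ref --count=1 refs/git-pr/origin)" ]; then
  git for-each-ref --format='%(refname) %(objectname)' refs/git-pr/origin
fi
```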
What do you think? What is the sun doing where you are? 😉
Originally posted by @jcansdale in #21 (comment)