Semantic Versioning and MinVersion/MaxVersion

Oct 24, 2010 at 9:19 PM

Currently, our .nuspec allows specifying dependencies using three different attributes: version (exact match), MinVersion and MaxVersion.  Is this concept typically used alongside SemVer, or is the SemVer semantic typically sufficient such that a package only needs to specify an exact version?

e.g. if I have a dependency on Foo 1.2.3, SemVer implies that I can safely use anything from 1.2.0 to 1.x.y (where x > 2).  Do I have this right?

So with SemVer, specifying an exact version actually gets you a range.

Is this implied range sufficient, or do we think we also need packages that have the ability to specify their own range, with SemVer being applied to each end of the range?  What does Gems do here?
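To make the implied range concrete, here is a small sketch (illustrative Python, not NuPack's actual resolver) of the reading described above: same major version, minor version at or above the one depended on.

```python
# Sketch of the implied SemVer range described above: a dependency on
# 1.2.3 is read as "any 1.x.y with x >= 2". Illustration only; this is
# not NuPack's actual resolution logic.

def parse(version):
    return tuple(int(part) for part in version.split("."))

def satisfies(dependency, candidate):
    dep, cand = parse(dependency), parse(candidate)
    # Same major version, and minor version at least the one required.
    return cand[0] == dep[0] and cand[1] >= dep[1]

print(satisfies("1.2.3", "1.2.0"))  # True  (same minor line)
print(satisfies("1.2.3", "1.5.9"))  # True  (later minor, same major)
print(satisfies("1.2.3", "2.0.0"))  # False (major bump)
print(satisfies("1.2.3", "1.1.0"))  # False (earlier minor)
```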

Oct 24, 2010 at 9:29 PM
What's interesting is that the version of a package is specified by the original package author, but a dependency on a package is specified by another package taking a dependency on the original package.

Hence I would imagine that if you specify a max/min version range, you would be overriding any SemVer semantics.

Perhaps, if you specify an exact match using the full version number, you get an exact match, but if you only use the first two octets, you get SemVer. For example, if I say:

<dependency id="foo" version="1.1.1" /> Then I get an exact match on v1.1.1 of foo.
But if I do
<dependency id="foo" version="1.1" /> I get SemVer semantics. So that matches Foo 1.1.1, Foo 1.1.2, etc...
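As a sketch of that proposed rule (illustrative Python, not an actual implementation): a full three-part version means exact match, a shorter version means prefix match.

```python
# Sketch of the rule proposed above: a full three-part version means an
# exact match; a two-part version matches any patch of that minor line.
# Illustration only, not NuPack code.

def parse(version):
    return tuple(int(part) for part in version.split("."))

def matches(spec, candidate):
    s, c = parse(spec), parse(candidate)
    if len(s) == 3:           # full version: exact match
        return s == c
    return c[:len(s)] == s    # partial version: SemVer-style prefix match

print(matches("1.1.1", "1.1.1"))  # True  (exact)
print(matches("1.1.1", "1.1.2"))  # False (exact match only)
print(matches("1.1", "1.1.2"))    # True  (any Foo 1.1.x)
print(matches("1.1", "1.2.0"))    # False
```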


Oct 24, 2010 at 10:19 PM
I think we should take a close look at other package managers' semantics before jumping to make any decisions. Gems seems to have min and max, but I'm not sure what its dependency resolution logic is.
Oct 24, 2010 at 10:27 PM

Phil, you left out something after "But if I do".  I assume you meant v1.1?

The problem with that is that people probably don't know they're using SemVer, so they're likely to specify the exact version of the dependency, e.g. Foo 1.1.1.  I don't think we want to block that from allowing them to bind to 1.1.2 (if 1.1.1 is not available), as that's sort of the whole point of it being SemVer.

David, yes, we should of course look at what others are doing.  I was under the impression from Rob that Gems was in fact using SemVer.  But I wasn't sure that Gems allows specifying Min/Max (you seem to say they do).  I'm fine generally saying that we follow the well-proven Gems model, but we need to be sure we understand it! :)

Oct 25, 2010 at 2:24 AM
With gems, if you have an explicit version and it is not there or no longer there, it fails.

Min/Max works great in gems because it is not a compiled language. Meaning things don't take a hard reference on a version enforced by the framework.

Because our "gems" are compiled dlls, we really want to consider how the Min/Max version dependency should work. Brendan Erwin and I had a conversation about this awhile back about whether you could do min/max version or just explicit versions with .NET compiled dlls.

I am under the impression (and maybe my machine just has a stricter policy) that if I don't have the exact assembly version of a reference that another library depends on, my application fails at runtime. The Assembly Resolver says it can't find a particular version if I am using a different version than a reference is using (unless of course I've successfully implemented binding redirects - see link below). The Fusion Log Viewer (fuslogvw.exe from the Visual Studio command prompt) is a great tool for watching assembly binding logs to figure out how the framework resolves references, and it helps you get more information when you are getting assembly binding errors. Suzanne Cook has some really good notes on this - still applicable even today.

So like I started with, we should be really sure that the CLR will even let us do min/max version. There are scenarios where I would still want to use min/max, but it has a very limited scope to me (I would use it in a scenario like solution factory where I am templating and want to always bring in the latest versions).

Of course I could totally be smoking crack and I'm fine admitting that. It's just been my experience that when I'm using nhibernate (which depends on castle.core and castle.windsor, which in turn depends on castle.core), if I don't specify a binding redirect, it fails miserably at run time (not at compile time).

Oct 25, 2010 at 2:26 AM
And when I said gems, I meant Ruby. Ruby is not a compiled language. lol
"Be passionate in all you do"

Oct 25, 2010 at 4:09 AM

Rob, what you describe with CLR versioning is indeed correct, and it does make versioning harder than it would otherwise be.  In some cases, the problem can be avoided when the assembly doesn't have a strong name, but a lot of libraries do.

When they do, using a binding redirect is basically the answer.  Ideally, NuPack would be smart enough to include the right binding redirect during package install.  This is something we've discussed, but it can be complex to get right, so we decided not to attempt this for v1.
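For reference, this is roughly what such a binding redirect looks like in web.config (the assembly name, public key token, and versions below are purely illustrative):

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Illustrative identity values only -->
        <assemblyIdentity name="Castle.Core" publicKeyToken="0000000000000000" culture="neutral" />
        <!-- Redirect any older reference to the installed 2.5.2.0 -->
        <bindingRedirect oldVersion="1.0.0.0-2.5.1.0" newVersion="2.5.2.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```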

Yes, the end result might be that in most cases, specifying Min/Max doesn't make a lot of sense, unless we force the user to deal with the binding redirects themselves, which is a lot to ask!

Oct 25, 2010 at 4:21 AM

Min/Max can work with .NET assemblies as well. There are two cases to consider: 

  1. Non-strongly named assemblies.
  2. Strongly named assemblies.

Non-strongly named assemblies: In this first case, min/max should work just fine (assuming the assemblies actually are compatible, of course). When the assembly is not strongly named, an exact version match is not performed, so there's no problem in this case.
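To make the min/max check itself concrete, here is a minimal sketch (illustrative Python with a hypothetical helper, not NuPack's resolver):

```python
# Minimal sketch of a min/max dependency check as discussed above.
# Hypothetical helper for illustration; not NuPack code.

def parse(version):
    return tuple(int(part) for part in version.split("."))

def in_range(candidate, min_version=None, max_version=None):
    c = parse(candidate)
    if min_version is not None and c < parse(min_version):
        return False
    if max_version is not None and c > parse(max_version):
        return False
    return True

print(in_range("1.5.0", min_version="1.0.0", max_version="2.0.0"))  # True
print(in_range("2.1.0", min_version="1.0.0", max_version="2.0.0"))  # False
print(in_range("0.9.0", min_version="1.0.0"))                       # False
```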

Strongly-named assemblies: In this case, the framework does indeed do an exact match. But here there are a few potential choices.

a) If you're the author of the assembly, you could consider a publisher policy. The problem is that this requires that both the assembly and the publisher policy assembly be in the GAC. This is a no-go for NuPack.

b) NuPack could inject a binding redirect automatically that matches up with the version range specified in the package. This should just work.

c) Perhaps we could lobby for another way to do this in a future version of the CLR. However, this would not be something we could take advantage of in a long while.

But obviously, even binding redirects can't solve the problem where the code simply isn't compatible and breaks at runtime. No amount of magic will fix that other than trying to push the various library authors to upgrade their own dependencies and gently point out that NuPack can help! In fact, hopefully, broader adoption of NuPack could actually help out with this situation that NuPack finds itself in. ;)

Oct 25, 2010 at 4:23 AM
Agreed. Broader adoption of package management will ultimately help OSS providers use releases only as dependencies and start bringing these pains into a greater light so that they can be fixed. :D
Oct 25, 2010 at 4:25 AM
You know, the behavior I described with assembly versioning issues is not with strong named assemblies. Just wanted to point that out. And like I said, I may have something turned on to a stricter setting on my machine. I need to set up a good example so we can all see what I mean.
"Be passionate in all you do"
Oct 25, 2010 at 4:28 AM

Right, Min/Max can make sense in theory, but as I wrote above, in practical scenarios it may not apply much, because:

  • Most libraries are strong named (just look in our current feed)
  • We probably won't tackle automatically adding the binding redirect in v1.
Oct 25, 2010 at 4:30 AM

How can we know by looking at our feed how many libraries are strong named?

Oct 25, 2010 at 4:42 AM

Sorry, it's not that simple :)  I meant enlist in the nupackpackages repo and go looking around the various lib folders.  We could probably automate it and get some full statistics.  Actually we should, that would be interesting.

Oct 25, 2010 at 5:07 AM

Ha! Ok. My gut feeling is that more projects won't have strong names. I think the big well established projects will, like NHibernate etc. But smaller projects (like many of mine) won't bother. ;)

Oct 25, 2010 at 5:13 AM

Yes, you might be right.  I had looked at the big ones and they all are.  I just looked at two random ones (Sublogix and TwilioApi) and they were not.

Are we absolutely sure that the version is essentially meaningless in non-strong named assemblies? I heard that, but haven't confirmed it (and Rob says above that he thinks it's not).

Oct 25, 2010 at 5:28 AM

Yes, from an assembly loading perspective, it is. I confirmed it.

But from a practical perspective, it depends on the library. If a library makes a breaking change from one version to the next, then it will fail at runtime. I believe that's what Rob was referring to when he pointed out that he ran into a runtime error. In that case, nothing will solve the problem, not even binding redirects.

Oct 25, 2010 at 5:35 AM

Great, thanks for confirming it.

I'm pretty sure we're only discussing the loader binding behavior here.  It goes without saying that breaking changes to code you rely on will break you in an unsolvable way :)  But that's where semantic versioning (and more generally the version range specification) comes in.  If you declare that you'll work with a certain range, then it's assumed that you will, other than getting locked out by the CLR strong-naming behavior discussed here.

Overall, we're not in such a bad place I think.  But we will need to auto-manage binding redirects in web.config at some point in the future to do a good job in all scenarios.

Oct 25, 2010 at 6:30 AM
It works with binding redirects, but not until I set those up. What I'm saying is that everything appears fine until I go to run the application. Then it falls down... and dies miserably.

It could be that it's usually with log4net and NHibernate. And now that I validate my story, they have a public key token as part of their CLR assembly identity (name, version, culture, public key token). So it could be I see the craziness just with these. I thought for a second it happened with something that was not strong named (a local library that we use). Hmmm.

Ignore me on this until I can validate/discount it.