The module cache in Go 1.11 can sometimes become corrupted, typically after network issues or when multiple go commands execute in parallel; the workaround is to delete GOPATH/pkg/mod (or run 'go clean -modcache'). I would guess your coworkers were seeing some variation of that.
I've had two coworkers who use different IDEs report corrupted module caches, apparently because their IDEs run `go` commands in parallel.
I think the initial vgo discussion and versioned Go modules proposal effectively said "developers expect semver; semver is helpful; Go is officially adopting and requiring semver". At the same time, I think there was also some discussion around "semver alone is not a complete answer". I can see how some of that might have been read as anti-semver, though I suspect that was not the intent.
In any event, it is exciting to see more energy going into the semver spec!
Hi Steve, I've looked at this relatively carefully, and I believe that Go does 100% follow the semver spec (modulo any bugs in the implementation).
However, while that happens to be my personal belief, I am not any kind of world-class semver expert, nor am I a semver maintainer, so I am always happy to learn more (especially from, say, an actual semver maintainer such as yourself ;-)
I think you are saying "-ish" in your comment above at least partly based on the two chunks of linked code above, including this comment there:
// This package follows Semantic Versioning 2.0.0 (see semver.org)
// with two exceptions. First, it requires the "v" prefix. Second, it recognizes
// vMAJOR and vMAJOR.MINOR (with no prerelease or build suffixes)
// as shorthands for vMAJOR.0.0 and vMAJOR.MINOR.0.
It is true that Go treats the leading "v" as a requirement for VCS tags to be recognized as encoding a semantic version. That said, as far as I am aware, requiring the "v" prefix for VCS tags is allowed by the semver spec; one can draw a distinction between a semver version itself (which does not include a "v") and how that version is encoded into a VCS tag (where the encoding is allowed to include a "v"). For example, there is an FAQ that was added a few years ago to 'master' at github.com/semver/semver that seems to address this:
prefixing a semantic version with a "v" is a common way (in English) to indicate it is a version number. Abbreviating "version" as "v" is often seen with version control. Example: git tag v1.2.3 -m "Release version 1.2.3", in which case "v1.2.3" is a tag name and the semantic version is "1.2.3".[1]
I am also aware that a leading "v" vs. no leading "v" for VCS tags can trigger some impassioned debate, so I might regret posting this comment. ;-)
Regarding the second piece from that comment above: that particular Go package can parse something like "v1" or "v1.2" (without the three integers required by semver). However, the result is not interpreted as a valid semver version by the overall 'go' tool. A VCS tag such as "v1.2" that the 'go' tool finds on a git repository will _not_ be interpreted as a semantic version (because it lacks the required three integers). Instead, the ability to parse "v1" or "v1.2" is used as part of a version query mechanism: you can run "go get foo@v1.2" as a way of asking "please give me the highest available release version in >= v1.2.0 and < v2.0.0". In other words, it is shorthand for a particular type of version query, which I would guess would not be in violation of the current semver 2.0 spec? If interested, there is more information about that query mechanism (which is called a "module query") in the Go doc[2].
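To make the distinction concrete, here is a minimal, hypothetical sketch of how such a query could resolve against a repository's tags. The regexes and the resolve function are my own illustration, not the 'go' tool's actual implementation (which also handles prerelease versions, build metadata, pseudo-versions, and more):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// fullSemver matches a complete vMAJOR.MINOR.PATCH tag, the only
// form treated here as a release version on a repository.
var fullSemver = regexp.MustCompile(`^v(\d+)\.(\d+)\.(\d+)$`)

// queryPrefix matches the vMAJOR / vMAJOR.MINOR shorthand that is
// only valid inside a module query such as "go get foo@v1.2".
var queryPrefix = regexp.MustCompile(`^v(\d+)(?:\.(\d+))?$`)

// resolve picks the highest release tag in the range the shorthand
// implies: "v1.2" means >= v1.2.0 and < v2.0.0.
func resolve(query string, tags []string) (string, bool) {
	m := queryPrefix.FindStringSubmatch(query)
	if m == nil {
		return "", false
	}
	major, _ := strconv.Atoi(m[1])
	minMinor := 0
	if m[2] != "" {
		minMinor, _ = strconv.Atoi(m[2])
	}
	best := ""
	var bestV [3]int
	for _, tag := range tags {
		t := fullSemver.FindStringSubmatch(tag)
		if t == nil {
			continue // not a full semver tag; ignored as a release
		}
		var v [3]int
		for i := 0; i < 3; i++ {
			v[i], _ = strconv.Atoi(t[i+1])
		}
		if v[0] != major || v[1] < minMinor {
			continue // outside [vM.m.0, v(M+1).0.0)
		}
		if best == "" || v[1] > bestV[1] || (v[1] == bestV[1] && v[2] > bestV[2]) {
			best, bestV = tag, v
		}
	}
	return best, best != ""
}

func main() {
	tags := []string{"v1.1.0", "v1.2", "v1.2.0", "v1.3.7", "v2.0.0"}
	got, ok := resolve("v1.2", tags)
	fmt.Println(got, ok) // prints "v1.3.7 true"
}
```

Note how the "v1.2" tag itself is ignored as a release (it is not a full three-integer semver version), while "v1.2" used as a query selects v1.3.7, the highest release below v2.0.0.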
Finally, here is a snippet from the Go doc[3] stating semver is used (and there is a link to https://semver.org in that section as well):
The go command requires that modules use semantic versions and expects that the versions accurately describe compatibility
I wouldn't be shocked if you have a different take on some or all of what I said above, but wanted to at least share my personal understanding...
So, primarily I am saying "ish" for two reasons. One, as I said below, I misunderstood how versions are actually used within Go. There's a lot of stuff out there, and keeping track of the three or four implementations I do know well is tough enough. Two, I don't want to say definitively that any particular implementation "does not implement SemVer", because I think the spec is deficient enough that it's really hard to say in general.
On to your specific points:
> That said, as far as I am aware, requiring the "v" prefix for VCS tags is allowed by the semver spec
This is true, SemVer says nothing about VCSes.
> However, being able to parse something like "v1" or "v1.2" is used for example as part of a version query mechanism.
Right, so that's what I thought this was getting at, and the general "range" concept isn't in the spec, so all of that is fine, spec speaking.
However, that doc comment describes it a bit differently than you do: it describes them as version numbers. So it's possible the doc comment is a bit misleading. That's very reasonable! This is why we need to clean up the spec text.
Makes sense. The way I would summarize it is: I believe the end-to-end Go system is 100% compliant with the semver 2.0 spec. That is true, to my knowledge, even though there happens to be an internal-only 'semver' package that also contains Go-specific functionality related to semver. That internal package both must be used properly and is used properly to keep the overall end-to-end system spec compliant, including proper use of functions like isSemverPrefix(version) to differentiate between what is allowed in a semver VCS tag vs. what is allowed in what is effectively a range-based query.
That's from sqlite. (Awesome tech, awesome license).
Related snippet from the "Distinctive Features Of SQLite" page[1] from the sqlite project:
The source code files for other SQL database engines typically begin with a comment describing your legal rights to view and copy that file. The SQLite source code contains no license since it is not governed by copyright. Instead of a license, the SQLite source code offers a blessing:
May you do good and not evil
May you find forgiveness for yourself and forgive others
May you share freely, never taking more than you give.
I would guess they are fairly focused on it at this point.
They are releasing microcode update mitigations for the CPUs of today, and at least state they will be improving things in the CPUs of the future, which is more-or-less what one might guess they would do with billions of dollars at stake.
That's not to say that they are going to magically get rid of all speculative execution, and I wouldn't try defending their PR approach, but one would guess they would at a bare minimum whittle away at the cost of mitigations.
Some related snippets about at least declared future intent. This obviously isn't a comprehensive list, but I think it suggests they realize the current state of affairs is not good for them:
From LKML[1] related to approach taken with the new microcode update for variant #2 being better/less costly in future CPUs:
Later CPUs are intended to have an 'IBRS all the time' feature which is
set-and-forget, and will perform much better, I believe. If we find
we're running on a CPU with that, we'll turn off the retpoline..."
And from today's Intel PDF regarding variant #2:
There are three new capabilities that will now be supported for this mitigation strategy. These capabilities will be available on modern existing products if the appropriate microcode update is applied, as well as on future products, where the performance cost of these mitigations will be improved.
And from today's Intel PDF regarding variant #3:
Future Intel processors will also have hardware support for mitigating Rogue Data Cache Load.
And a related comment from the always reputable source of "some security guy on the internet"[2]:
Whatever mitigations CPU vendors come up with will be in concert with software changes. "Page table isolation" is an overnight redesign of all operating systems. It's here to stay. The next step is for Intel CPUs to fix its performance cost
As far as I can tell, one bit of news (to me at least) in this Intel whitepaper from today is that the microcode update to mitigate “variant #2” would be needed for Broadwell and later, rather than the Skylake and later that had been stated yesterday on LKML. From Intel's PDF today:
"For Intel® Core™ processors of the Broadwell generation and later, this retpoline mitigation strategy also requires a microcode update to be applied for the mitigation to be fully effective."
vs. at least what I had seen on LKML list yesterday seemed to indicate Skylake+.
Sample snippet from LKML[1]:
"The x86 IBRS feature requires corresponding microcode support.
It mitigates the variant 2 vulnerability..."
and related sample snippet from LKML[2]:
"On Skylake the target for a 'ret' instruction may also come from the
BTB. So if you ever let the RSB (which remembers where the 'call's came
from) get empty, you end up vulnerable.
Other than the obvious call stack of more than 16 calls in depth,
there's also a big list of other things which can empty the RSB,
including an SMI.
Which basically makes retpoline on Skylake+ very hard to use
reliably. The plan is to use IBRS there and not retpoline."
I'll confess I'm not 100% following all the ins and outs of this, but can anyone comment on additional details regarding Skylake+ vs. Broadwell+, and/or confirm whether there was a change?
Presumably they've found a way to make retpoline work on Broadwell using a microcode update, which is probably better than the alternative of adding a very expensive kludged way of clearing the indirect branch cache in a microcode update.
My initial reaction to the headline was that it sounded like (a) no more Python, and (b) this is a decided future direction.
Instead, it sounds like this is a proof-of-concept for flipping the main 'hg' command from being python + C extensions, to instead being a rust binary with an embedded python interpreter. Part of the rationale appears to be performance, but also smoothing out cross platform experience, especially on Windows.
Pulling out some related snippets:
-----
While Python is still a critical component of Mercurial and will be
for the indefinite future, I'd like Mercurial to pivot away from
being pictured as a "Python application" and move towards being
a "generic/system application." In other words, Python is just
an implementation detail.
-----
Desired End State
hg is a Rust binary that embeds and uses a Python interpreter when appropriate (hg is a Python script today).
Python code seamlessly calls out to functionality implemented in Rust.
Fully self-contained Mercurial distributions are available (Python is an implementation detail / Mercurial sufficiently independent from other Python presence on system)
-----
"Standalone Mercurial" is a generic term given to a distribution
of Mercurial that is standalone and has minimal dependencies on
the host (typically just the C runtime library). Instead, most of
Mercurial's dependencies are included in the distribution. This
includes a Python interpreter.
-----
This patch should be considered early alpha and RFC quality.
Python has a concept of "extending" and also "embedding". It looks like they are looking at embedding[0], which enables you to use the normal CPython interpreter from within another program. (So no, not writing a new Python interpreter in Rust.)
Sample snippet from python docs:
-----
So if you are embedding Python, you are providing your own main program. One of the things this main program has to do is initialize the Python interpreter. At the very least, you have to call the function Py_Initialize(). There are optional calls to pass command line arguments to Python. Then later you can call the interpreter from any part of the application.
There are several different ways to call the interpreter: you can pass a string containing Python statements to PyRun_SimpleString(), <...etc..>
If interested, you can see their work-in-progress main.rs in the related code revision[0], which includes their Rust code calling down to the C function Py_Initialize() to spin up the now-embedded CPython interpreter that is living "inside" a Rust program:
unsafe {
    Py_Initialize();
    PySys_SetArgv(args.len() as c_int, argv.as_ptr());
    PyEval_InitThreads();
    let _thread_state = PyEval_SaveThread();
}
After an intro ad, Stonebraker gives the semi-common "3 V's" definition of big data ("volume, velocity, variety") popularized by META/Gartner[0]. He then talks briefly about using big data for integration across many data sources, and concludes by relaying the interest the Miller beer company expressed in knowing the relationship between El Niño / temperature / precipitation and sales of beer.
If so, that is addressed for Go 1.12:
https://github.com/golang/go/issues/26794