jbogp, I'd be interested to chat with you about your experience interviewing with DeepMind (I found it by searching HN for DeepMind info). My contact info is on my user profile page. Sorry about the thread-jack.
Also very interesting: if you look at the high-res picture and zoom in at the bottom right, you'll notice some sort of cable on the ground/boulder.
This could be the cable from one of the harpoons that may have fired but didn't anchor themselves, or it could be a feature attached to Philae that's in the field of vision.
The sunlight appears to be coming from behind and to the left of Philae in this picture, at a fairly low angle, so the back of it should (I hope) be reasonably illuminated at the moment.
It's definitely not clear from this picture which way is "up". It looks to me like up may be towards the top right of the picture, in which case Philae may be at a 45 degree angle. But I'm probably wrong - the full panorama should tell all.
After staring at the image a lot more, I realised I'm completely wrong. The leg reveals which way the lighting is coming from - above, slightly to the left, and probably slanting slightly into the camera. So my original interpretation of the orientation cannot be correct.
The real question is whether the picture should be rotated 90+ degrees clockwise.
The lander's leg is very well lit and there is clearly nothing underneath it. The antenna appears (based on its end converging with its own shadow) to be resting against either a rock or the surface.
I'm not a rocket scientist ... I hope I'm grossly misinterpreting this image!
People grow out of things and think platforms are getting old, forgetting that sometimes it is they themselves who are struggling to adapt to fast-evolving ways of using a concept as generic as Twitter.
I'm not saying that he is necessarily wrong about the recent changes to the timeline, but it does sound a bit like "things were better when we were the kings of the hill"... you don't hear the Bieber and Lady Gaga fans complaining about Twitter...
>People grow out of things and think platforms are getting old, forgetting that sometimes it is they themselves who are struggling to adapt to fast-evolving ways of using a concept as generic as Twitter.
Yeah, but sometimes services, even ones as generic as Twitter, simply fade away too.
Having had a lot of my undergrad CS classes taught in Ada (in Toulouse, France, the home of Airbus), I get nostalgic every time I see a post mentioning Ada on HN.
This is nice and easy to use. It does, however, use only an extremely limited subset of d3's capabilities, which makes me wonder what added value building it on d3 brings compared to other "pure", lightweight chart libraries that offer the same functions [1].
I find it's usually easier to roll my own solution, because these libraries often don't provide the level of customisation I need (or may need in the future). The cost of moving away from them is also nonzero.
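To illustrate the "roll my own" point, here is a minimal, hypothetical sketch of a hand-rolled bar chart written directly against d3 (assuming the d3 v3 scale API and an existing #chart container; the data and sizes are made up). Once you need custom behaviour, working at this level is often no harder than fighting a wrapper library:

```js
// Minimal hand-rolled bar chart sketch, assuming d3 v3
// (the scale API became d3.scaleLinear in v4) and an
// existing <div id="chart"> element. Data/sizes are illustrative.
var data = [4, 8, 15, 16, 23, 42];
var width = 420, barHeight = 20;

// Map data values to pixel widths.
var x = d3.scale.linear()
    .domain([0, d3.max(data)])
    .range([0, width]);

// One SVG, one rect per datum.
var svg = d3.select('#chart').append('svg')
    .attr('width', width)
    .attr('height', barHeight * data.length);

svg.selectAll('rect')
    .data(data)
  .enter().append('rect')
    .attr('y', function(d, i) { return i * barHeight; })
    .attr('width', function(d) { return x(d); })
    .attr('height', barHeight - 1);
```

Every attribute is under your own control here, which is exactly the customisation point: tweaking this is usually cheaper than migrating away from a wrapper later.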
I like flotcharts too; its Canvas2D rendering has some advantages as well. I would, however, like to remove the jQuery dependency. I tried using Zepto instead of jQuery; it worked, but Zepto's codebase is ugly and it relies on outdated JS features that emit warnings in Firefox.
Having just had a 3-hour technical interview with Google DeepMind, I cannot agree more with a lot of the points raised in this post.
DeepMind being a company fuelled by machine learning, statistics, maths, and computer science, it made sense for the interview process to follow this simple organisation.
I was, however, very disappointed by the questions asked in each part. Not a single one of the ~100 questions asked during those 3 hours of my life demanded any "problem solving" skills, only encyclopaedic knowledge (describe this algorithm, what is a Jacobian matrix, define an artificial neural network, what is polymorphism, give examples of classifiers, what are the conditions for applying a t-test...).
So what if someone doesn't remember every definition from the respective stats/ML/CS/maths bibles, as long as they're clever enough to look it up and quickly understand what's needed?
I mean, I get that these are very basic questions, but for a highly qualified interviewee who necessarily has other offers given this skill set, this tedious, back-to-school, time-wasting process does not reflect well on the company and makes me consider my other options even more seriously.
Do you think mathematicians start over from learning how to count every time they encounter a new problem? That's different from looking up the 10th digit of pi: that's encyclopedic.
Knowing what a classifier is is more than encyclopedic knowledge; it's something you should probably know before joining an AI company.
As I said before, yes, these are very basic questions... and I'm not complaining that the questions were too hard or anything; I'm saying that finding someone who can answer 100% of these "definition" questions will not tell you anything about how competent that person is...
Having them face a simple stats/CS/maths/ML problem and seeing whether they are able to come up with the relevant concepts would be far more interesting.
Ah, I see. Personally, I think I could have turned the "give an example of a classifier" question into a deep dive into their competency (talking about how one would build that classifier, etc.), but they might get so many completely unqualified candidates who don't know those things that they need those sorts of hurdles before they start really evaluating you for more holistic problem-solving traits.
But if you don't think they did that at all then I guess that's bad!
I'm generally in favour of the "look things up" argument, but so many people in tech take it to the extreme of "I can fully understand an entire discipline by looking at the Wikipedia page for 5 minutes".
I completely agree with you. Also, to be fair to DeepMind, I should mention that this was only the first round of interviews. Hopefully the following steps will prove more stimulating, in which case 3 hours for the first step was maybe slightly too much.