
I cannot believe that people can have that question. Or, well, I do believe it, but then my second question usually is: have you ever used Linux/BSD or any OSS project in the past?

I don't want to be rude at all, but the suggestion of "contributing to an OSS project" makes a lot of sense if you have already had to work with or use OSS software. If you have had to use one of these projects, you probably already understand the most important aspect of it: the social one. Coding, IMHO, is just secondary (and not necessary at all).

Thus my suggestion would be: if you have never dealt with an OSS project before, find some OSS software which you genuinely like, try to use it (well), and follow its development. Once you have, you will certainly know how to contribute. There's nothing more to it.

If you are already familiar with OSS but so far have never found anything "interesting", the best thing to do IMHO would be to start your own. Release something that you made and that you would like others to use.

Most importantly, do all of that for fun. Don't do it because you have to or because they said "it would be helpful". Helpful for what? Coding style/quality varies wildly, as does the community around the project itself.

The biggest difference between a "professional" project and an OSS project is exactly this: people work on OSS projects for many reasons, but mostly for fun or passion. Some projects strive for quality, some for functionality, and some just solve an "itch" somebody had. Understanding the social aspect, again, is the biggest differentiating factor.

There's no point in contributing to OSS unless:

1) you released something that you want to maintain,
2) you maintain something somebody else released,
3) you are fixing an itch you have, or
4) you are having fun coding (or doing any other activity around said project).


I won't address the "I cannot believe" part...

>If you are already familiar with OSS, but so far never found anything "interesting", the best thing to do IMHO would be starting your own

I couldn't disagree more. If you haven't found a project that is "interesting" or simply useful, then you're not really programming (or perhaps aren't looking in the right place?). There are SO many useful projects, or semi-useful projects (and lots of not very useful ones) that there's got to be at least one (if not dozens) that would be interesting.

Regardless, if one couldn't find an interesting project to contribute to (at least a little bit), starting a new one would be a bad idea: unless it's really unique or compelling, it won't get a lot of attention and you won't get any good feedback on what you're doing (which is critical if you want to learn).

I do agree that the social aspects of contributing to and/or running an OSS project are important. So even if you're not contributing code to a project, filing issues or commenting on other issues is a great way to learn.


If you are not familiar with Linux (for example, you're working on Windows or mostly doing web development), it entirely could be. Mac OS is more friendly in this respect, but I've certainly seen many developers who never scratched the surface and went directly to Xcode/ObjC/iOS. In that sense, you can build a career without ever considering OSS development/usage.

What's best for somebody who, as of now, is wondering which project they should be looking at? If you don't know what you're looking for, any advice is just as good as a Google or GitHub search (i.e., useless).

If you start by having fun, even by just publishing your random projects, you will be dragged in through dependencies (ironically). I would rather recommend choosing something fun to work on than a random project to look at.


I can definitely agree with the social aspect of OSS and contributing to OSS. My contributions have come as a direct result of casually interacting with various people in the community, and serendipitously finding out about things I can contribute to along the way.


Not sure what you mean by "map data". You can certainly have a list of props that are in the way and have them as a mesh. It's certainly easier if you have only one prop, but you have to account for the fact that they may stack together in any way, and suddenly the geometry configuration becomes non-trivial again.


I got an HP EliteBook Folio 9470m at work (not my personal choice), but I thought I should share my impressions.

I was genuinely impressed by the fact that this is the only ultrabook I've seen with a swappable battery. Yes!

The ultrabook itself is fine, and the build quality is excellent: 8 GB of RAM, 256 GB SSD, HD 4000 graphics. I wasn't able to boot the latest Ubuntu with EFI, but "hybrid" boot works just fine. There's basically not a lot of hardware variation among ultrabooks, so everything works more or less correctly. I was personally able to work for 5 hours on the battery (I'm a developer, so you can imagine my workload as slightly heavier than average browsing).

I do have some remarks:

- The keyboard is generally good enough, but I've always found HP keyboards to be sloppy compared to ThinkPads, and that is also true for this ultrabook.

- The touchpad is OK (Synaptics), but the touchpad buttons are crap, like on every HP I've ever used. HP doesn't seem to get buttons: when you hear the click, it doesn't mean you have clicked. Wake up, HP, I've been using EliteBooks since the '90s and this HAS NOT changed!

- Not a fan of the "nipple" in the middle of the keyboard; it wastes space for the keys.

- Useless fingerprint scanner, like most HPs.

Both the keyboard and touchpad points are moot if you are fine with HPs in general, since this is absolutely equal to any other HP EliteBook.

- Some problems with the latest iwlwifi driver (a few panics during network scanning in the last few weeks), though that's hardly an HP-only problem.

It comes preloaded with Windows 8, which was easy to zap. Battery run-time was about equal between Linux and Windows 8 for me, contrary to what other people mention. I used Windows 8 for about two weeks (to give it a spin), using Visual Studio, etc. 5 hours of work on battery is the longest I've ever had so far on a laptop. Being able to have a spare battery is a big plus.


If you have ever had to handle a mail server at all, you would recognize that you have a choice in using Spamhaus (or any other DNSBL).

These people have put in place a high-quality method to identify spammers. I've been around since their beginnings, and their list has been incredibly successful (very high quality) for me, compared to NJABL and other "dynamic" lists based on honeypots, or lists driven entirely by user backlash (say hi to SpamCop).

You would also recognize that you can just as well tag the message with a "likely spamminess" score for use further along the chain, and people would still complain that their "legitimate" message was tagged as spam by SOMEBODY, while they wouldn't complain if it had been tagged as spam by a learning algorithm.

In short, people would complain anyway, except that Spamhaus is doing real damage to the spammers (as in "the mail really didn't go through") and reducing their revenue, thus forcing them to come out with such measures. Not that they will accomplish anything anyway. Spamhaus helped stop a lot of known/professional spammers, and I applaud them for that.
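
(For anyone who hasn't poked at one: a DNSBL is just a DNS zone, so the whole check is a reversed-octet A lookup. A rough Python sketch - zen.spamhaus.org is the real combined zone and 127.0.0.2 is the conventional "always listed" test entry, but treat the rest as illustration:)

    import socket

    # DNSBL lookup: reverse the IPv4 octets, append the list's zone, do an A query.
    # An answer (usually 127.0.0.x) means "listed"; NXDOMAIN means "not listed".
    def dnsbl_listed(ip, zone="zen.spamhaus.org"):
        query = ".".join(reversed(ip.split("."))) + "." + zone
        try:
            return socket.gethostbyname(query)  # e.g. "127.0.0.2" when listed
        except socket.gaierror:
            return None  # not listed (or the lookup failed)

    print(dnsbl_listed("127.0.0.2"))  # the standard test entry should always be listed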


I do not have any choice at all about what spam filters the recipients of my email may be using. I have never had this problem personally, but there are many, many accounts on webhostingtalk.com of IP ranges being banned by Spamhaus without any evidence of spam, of IP addresses that remained banned after ownership changed hands, and other problems. There are always two sides to every story. On balance I think Spamhaus is doing a very good and necessary thing. I don't know about this particular case, but I've read accounts of what sound like very reasonable grievances.


I've been designing honeypots/traps as triggers for mail filtering infrastructures for years, and this is a very hard process to automate. It started as something you could watch from time to time in the late '90s, maybe slap in a DNSBL or two, but by now it has become a bloody nightmare. I remember when, at some point, almost everybody independently started to "reinvent" greylisting, even before it was called that. Reading nanae (via NNTP) was always a good read.

You constantly have to check whether spammers have noticed your honeypots, because once they do they can avoid them or even use them against you (the bigger you get, the more sophisticated these attackers get too). You have to use tagged email addresses that can be linked back to the offenders, methods to probe address ranges multiple times before validating them, and ways to automate the unlisting as well. False positives are basically unavoidable at some point, also because spammers themselves like to rotate their addresses based on their previous owners, or use known datacenters that are "too big to be blocked" wholesale for this exact reason. If they get a chance to learn one of your trigger addresses, a common practice is to send spam from a "safe" range to the trigger address, in an attempt to generate a false positive and thus, of course, backlash. It's sickening.
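
(To make the "tagged address" idea concrete, here's a toy sketch - the HMAC trick is the general approach, but the names, domain and key are obviously made up:)

    import hmac, hashlib

    SECRET = b"not-the-real-key"  # made-up key for the example

    # Each trap address embeds an HMAC of the place it was seeded, so spam arriving
    # at it can be traced back to whoever harvested that page/list.
    def trap_address(seed_location, domain="example.org"):
        tag = hmac.new(SECRET, seed_location.encode(), hashlib.sha1).hexdigest()[:12]
        return "trap-" + tag + "@" + domain

    def identify_leak(rcpt_addr, known_seeds):
        # Given an address that just received spam, find which seed it was planted at.
        for seed in known_seeds:
            if trap_address(seed) == rcpt_addr.lower():
                return seed
        return None

    addr = trap_address("forum-signature-2013")
    print(addr, "->", identify_leak(addr, ["forum-signature-2013", "usenet-post"]))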

Exchanging digests of message contents among multiple servers cooperatively became a good indicator of spamminess (Vipul's Razor), though you would also catch legitimate bulk emails in the process, and spammers quickly adapted by randomizing email contents, so the method quickly became ineffective.
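
(A naive sketch of the digest idea, just to show why per-message random content kills it - real Razor-style systems use fuzzy signatures, not an exact hash like this:)

    import hashlib
    from collections import Counter

    reports = Counter()  # digest -> how many cooperating servers reported it

    def digest(body):
        normalized = " ".join(body.lower().split())  # trivial normalization
        return hashlib.sha256(normalized.encode()).hexdigest()

    def report(body):
        reports[digest(body)] += 1

    def looks_bulk(body, threshold=5):
        return reports[digest(body)] >= threshold

    for _ in range(6):
        report("BUY cheap pills   NOW!!")
    print(looks_bulk("buy cheap pills now!!"))   # True: identical digest across servers
    print(looks_bulk("buy ch3ap p1lls now xq7"))  # False: randomized content evades it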

The real problem here is that these assholes don't care as long as they can deliver the message; that's the only metric they have and care about. Maybe you don't care either, because you can filter later, but that's a huge volume of trash that needs to be shoveled around. I've witnessed many cases in organizations bigger than a hundred employees where several servers ran 24/7 just to churn messages through dspam or similar filters before delivering them to the final mailbox. That's a huge, measurable cost in wasted power, all because of a couple of assholes.


I had a similar problem and never found out exactly why it happened.

The hypothesis I came to was that, for a long time, we hadn't been using SPF records on the domain associated with our IP address.

Some spammers were taking advantage of this by sending emails from different IP ranges with the From: header spoofed to be from our domain.

So Spamhaus blocked our IP address on the grounds that spam filters would also be able to confidently block anything appearing to originate from a domain name that resolved to our IP address.


It's extremely unlikely that spoofed headers or the lack of an SPF record would get you listed on an RBL, especially Spamhaus. I can't guess what happened in your case, but somehow your IP address obtained a bad reputation or was unlucky enough to be in a tainted block. FWIW, the very first thing I do after getting an IP allocation is run an RBL check on it and demand a replacement if it's listed anywhere.


Yes, it was a fairly weak hypothesis. OTOH we got on several RBLs a number of times and managed to get taken off them. Since I added an SPF record, it hasn't been a problem.

We didn't use SPF to begin with because there were a large number of hosts legitimately sending mail for the domain, and it was a pain to get all of the IP addresses for various crazy reasons.
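
(For anyone checking their own domain later: SPF is just a TXT record, something like "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all", and include: entries help with the "lots of legitimate hosts" problem. A quick sketch using the dnspython package - resolve() is the current API, older versions call it query():)

    import dns.resolver

    def spf_records(domain):
        answers = dns.resolver.resolve(domain, "TXT")
        # naive: assumes single-chunk TXT records
        return [str(r).strip('"') for r in answers if "v=spf1" in str(r)]

    print(spf_records("gmail.com"))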


Are there any digitizer users here?

I use a Wacom digitizer daily for notes and sketches instead of pen & paper. My wet dream is a pressure- and tilt-sensitive, e-ink-based device, but it looks like the Surface Pro is the closest you can get currently - and this is a good review.

If you want a portable device (laptop/ultrabook/tablet) with a good digitizer that you can actually use (that is, wacom based), your options are actually very few.

There is the Lifebook T902, or the ThinkPad X230T. Did I miss anything else? Both are convertible laptops, both are quite heavy, have medium-to-poor battery life even if you extend them with the additional battery, and a lower-DPI screen. I would have expected higher-end graphics on those laptops, but the integrated HD 4000 is ridiculous when you consider you basically get the same on an ultrabook.

Not to mention that the price range is simply off. The Surface Pro is way cheaper.

I used an earlier version of the Lifebook T902. It's actually better than having a separate digitizer taking up useless space on the desk, but it's still cumbersome. You cannot draw unless you flip the screen (the position is odd otherwise). It's really heavy. A clipboard with paper is all-around better.

There are two market segments served by this usage pattern: on-the-go artists, and cheap Cintiq replacements. Drawing on a Cintiq is just awesome, but Wacom basically has a monopoly and the prices are just unjustifiable. Even the Intuos line is, IMHO, overpriced by at least a factor of 2. The sad reality is that they have absolutely no real competition. I tried several N-Trig-based digitizers (lately the Vaio Duo 11), and they just suck. The tracking is just worse, with many jumps over just a few hours of testing, not to mention that the pressure sensitivity is lower too (when you're drawing strokes it's quite visible unless the software is interpolating it for you).

Just look at the missed opportunities there are! The Taichi 21 and VAIO Duo 11 are cool, but they use N-Trig. The keyboard on the Lenovo Yoga is awesome, but there's no digitizer. The Dell XPS 12 looks stunning, but again a missed opportunity: it was rumored to ship with a Wacom layer, but in the end it didn't.

The only downside of the Surface is the keyboard. I tried the flip keyboard of the Surface RT and I can only hope that the keyboard for the Pro is different, because it sucks: missed keys, zero feedback. Admittedly, it's better than typing on an on-screen keyboard, but the Taichi 21 or Dell XPS 12 approaches are way better.

As a sad note, the replaceable battery concept is gone on all these models. You know, I would settle for lower battery life if I could just have 2 or 3 batteries. I was actually shocked that at least HP offers the EliteBook Folio 9470m, which has a replaceable battery in a thin format (the ultrabook is awesome), so there are really no excuses for it.


There are a couple of recent Atom-based Windows 8 tablets with Wacom. They're much lighter and thinner than the Surface Pro, but they also trade away CPU power for it.

Lenovo Thinkpad Tablet 2. Dell Latitude 10


That eInk-based device you want won't happen until eInk refresh rates improve by a lot - they list an image update time of 120ms for monochrome, and up to 980ms for color. That needs to improve by at least an order of magnitude before it can be considered responsive enough for any direct-manipulation application.


In this area I was hoping for the NoteSlate to succeed, but it seems it went vaporware.

For notetaking the refresh speed is not so critical, as the areas to refresh are limited to the writing spot. You would probably notice lag, but if you've ever drawn with heavyweight painting/retouching programs, lag is sometimes introduced by processing and you just get used to it (you just don't expect immediate results and keep going).

I would still prefer a slight lag and the ability to avoid a glass screen in this case.


>There is the Lifebook T902, or the ThinkPad X230T

I have an X230T (typing on it now). Overall the whole thing is great. I would highly recommend it.


Can you give us some feedback on how the screen hinge flips? Is there a slot for the pen?


The Samsung Galaxy Note N8000 tablet uses a Wacom digitizer.


The N8000 is a different class of device: a low-spec, Android-based phone.


Actually, the N8000 is a 10-inch tablet.


I think this scenario routinely happens at many companies; you just don't hear public stories about it, because in the end what happens is the same for all software: continuous transition/refactoring. I know I've worked on many projects in these conditions, though not of that size.

With more and more experience of that kind, and from working on stinking code-bases, I've come to the conclusion that while in the past I might have thought that trashing the code and starting from scratch would help, now I would probably approach most problems by pushing new code in the form that I want and transitioning the rest as changes are required.

I've had projects that I built myself with great design care, but after 5-6 years, due to shifting requirements, they too started to look like something you could have done a better job on by starting from scratch. The reality is that, in retrospect, all code is suboptimal.


Does anyone have some experience with pelican (possibly liquidink) and rest2web?

There are many static website generators, but I'm looking for a Python+ReST solution. I've been using rest2web a lot, and I really love its simplicity compared to the other solutions. rest2web is really straightforward. In the end, it's the python-docutils module that does most of the work anyway, while rest2web simply collects the website structure.
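
(For reference, the docutils call that does the heavy lifting is roughly this - publish_parts() is the real docutils API, the page content is just an example:)

    from docutils.core import publish_parts

    # One ReST page rendered to an HTML fragment; this is the part docutils does,
    # while rest2web/pelican mostly decide where the output goes and what wraps it.
    rst_source = "My page\n=======\n\nSome *emphasised* text and a `link <https://example.org>`_.\n"

    parts = publish_parts(source=rst_source, writer_name="html")
    print(parts["html_body"])  # the rendered fragment a generator would drop into its template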

The only downside is that rest2web lacks a bit of polish, and I really wish it came with the ability to generate RSS feeds for a particular tree or tag. I was thinking about writing a plugin, but I'm unsure.
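
(The feed itself isn't much code; a toy sketch with just the stdlib, newest-modified files first - the paths, URL and the "take the date from the mtime" shortcut are all made up, a real plugin would read the ReST docinfo instead:)

    import os, email.utils
    from xml.sax.saxutils import escape

    def rss_for_tree(root, base_url="https://example.org/"):
        # Collect .rst/.txt files under root, newest-modified first.
        paths = [os.path.join(d, f) for d, _, fs in os.walk(root) for f in fs
                 if f.endswith((".rst", ".txt"))]
        paths.sort(key=os.path.getmtime, reverse=True)
        items = ""
        for path in paths[:20]:
            rel = os.path.relpath(path, root)
            date = email.utils.formatdate(os.path.getmtime(path))  # RFC 2822 pubDate
            items += ("<item><title>%s</title><link>%s%s</link><pubDate>%s</pubDate></item>\n"
                      % (escape(rel), base_url, escape(rel), date))
        return ('<?xml version="1.0"?><rss version="2.0"><channel><title>Changes under %s</title>\n%s</channel></rss>'
                % (escape(root), items))

    print(rss_for_tree("content/notes"))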

pelican seems to already be built for this purpose. Actually, pelican seems to mostly target blogs, while I just want "a feed of changes" for a particular directory tree. I don't want a blog-turned-into-a-website approach.

Has anybody had this problem? I'm really looking for feedback from people here who used rest2web and moved to pelican/liquidink, or maybe the other way around. Figuring out the limitations of these tools requires a long time investment, and I can't really decide by just trying them out on toy pages.


I could say exactly the same about any source-to-C compiler, and/or any system allowing decent FFI with C. Unless there is some reason other than the JVM itself, C has an even larger code base.


In a way, C can be considered an "existing platform" in much the same sense. I didn't include it in my list of examples because I really don't want to have to fight build tooling and cross-platform dependency hell in 2012 if I don't absolutely have to. With e.g. the JVM, you can get a jar from somewhere (or maven it in), and -poof- it just works, everywhere. This is a major advantage to the whole "ecosystem" point.


I wouldn't normally care about the state of GNOME, but as a developer I find GTK itself in a really sorry state of affairs.

At some point with 2.x, GTK stopped being GIMP's toolkit and became part of GNOME. Fortunately it remained more or less self-contained, but it's no longer the case with GTK3.

As a user, I cringe at the usability and responsiveness of GTK3 applications. I really dislike what the built-in dialogs have become. I don't like how some widgets now work. No (easy) theming (as a reversed-color-theme user) is also a major letdown.

I always considered GTK a nice toolkit from the user's perspective, and up to GTK 1.x it was also considerably faster than Qt. GTK2 killed that, and at the same time removed any support for exotic OSes. I had 1-line patches refused for pretty much the same reasons you read in the article.

But as a user I still preferred GTK because of some nice unix-centric features: tear-off menus (which disappeared at some point), column-based file browsers (again, killed later), user-customizable key bindings in any application (can you still do that? I don't even care anymore), low memory use, fast engines, etc.

But now Qt is just superior on every front. Qt has native support for OO and a nice, consistent, multi-platform API, whereas GTK3 still depends on the shitty glib stack that pretends to be an OO framework (and does a poor job at it). Ever got random glib warnings from GTK applications on the console? My xsession-errors is full of them. As a developer I just cringe at GTK. It was always bad from day 1, but now it doesn't make any sense anymore. Whenever I need to consider a toolkit for a C-only program (where Qt or FLTK is not an option), I usually go for UIP. It's a shame that the looks of these toolkits don't integrate with the rest of the UI.

Right now I actively remove any GTK3 application. Whenever an application gets rebuilt against GTK3, I switch to a Qt counterpart, which is usually more responsive and more stable over time. GTK didn't deserve this.


Qt/KDE have their issues as well. Both Qt and KDE have their own versions of each object (e.g. QAction vs KAction). This means you have two choices when writing an application in Qt -- Qt-only or KDE-based.

The Qt stack does not have support for mimetype handling for files (e.g. I want to open any text/html file). You need KDE for that.

Qt/KDE require you to generate moc files (preprocessing the C++ code), which can make a project harder to maintain if you're not using CMake or the Qt project format.

Qt broke the behaviour of QAudioDeviceInfo by moving it to a different package (QtMultimedia vs QtMultimediaKit). There have been times when Qt has not shipped a pkg-config (.pc) file (esp. for the package QAudioDeviceInfo moved to), which makes it difficult to add that package on build systems other than Qt's own.

Qt does not support using gettext for translation.

Qt is in a transition from QtWidgets to QtQuick which (like Gtk3 themes) is currently a moving target.


I'm pretty sure there's a gconf/dconf key somewhere to enable the customisable keys.

You can see how customisable keys could confuse a normal user if they did it by accident though.

At least improvements are happening to GTK (if slowly, for lack of manpower). For instance, GTK3 makes drawing a lot simpler (much more based around cairo); changes like this are important, as they should make it quicker to build apps.
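
(To be fair to GTK3 on this point, the PyGObject version of "draw something" really is short - the "draw" signal hands you a cairo context directly; the window itself is just a throwaway example:)

    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk

    def on_draw(widget, cr):
        # cr is a cairo context handed to us by the "draw" signal
        cr.set_source_rgb(0.2, 0.4, 0.8)
        cr.rectangle(10, 10, 120, 60)
        cr.fill()
        return False  # let other handlers run

    win = Gtk.Window(title="cairo demo")
    area = Gtk.DrawingArea()
    area.connect("draw", on_draw)
    win.add(area)
    win.connect("destroy", Gtk.main_quit)
    win.set_default_size(200, 120)
    win.show_all()
    Gtk.main()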

Toolkits etc. do need to be refreshed from time to time or they become unmanageable; 10 years or so was pretty good for GTK2, and GTK3 is only just getting started.


UIP?


I wish I could see this kind of discussion on the wayland/name-your-compositor mailing lists. Today it would be easy to add a new attribute to the ICCCM so that applications that don't want foreground unredirection (i.e. they actively want to be composited/transparent, which is a minority) could signal it, but it looks like, with the unification of the window manager and the compositor, we are losing this kind of extensibility.

I do realtime graphics with GL for a living, and I'm really sad about the state of the Linux desktop, and especially worried about its future. GL performance on any modern distribution is worrisome because of the compositors, and disabling them is the first thing I have to tell any customer lamenting performance issues. Being unable to set vsync on a per-window/per-context basis is a major problem, and the performance hit you take is unacceptable. It seems that removing vsync is all the rage these days for games and toys, yet tearing is unacceptable in any other context, when you can actually get triple buffering with modern hardware.

Why are we optimizing for toys, when what matters are actual applications and everyday performance? People don't seem to realize that while the GPU can be used as a general-purpose computing unit, the usage pattern is vastly different. The process handling the screen often needs higher, or even total, control over it. You immediately notice video stutter and lagging frames.

But what do I know... I do the same with audio, trying to squeeze decent latency out of PortAudio and/or ALSA directly, while people go through the PulseAudio pipeline...
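
(If anyone wants to play with this from Python: the sounddevice package wraps PortAudio, and the knobs are roughly these - the numbers are just a starting point, not a recommendation, and what you actually get still depends on the backend:)

    import sounddevice as sd

    def callback(indata, outdata, frames, time, status):
        if status:
            print(status)        # over/underruns show up here
        outdata[:] = indata      # simple pass-through

    # Small block size and an explicit 'low' latency hint, callback-driven I/O.
    with sd.Stream(samplerate=48000, blocksize=64, channels=2,
                   latency="low", callback=callback):
        sd.sleep(5000)           # run for 5 seconds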

