The concept is the same, but the execution is different. As a rule of thumb, Focusmate may work well for light, occasional procrastinators, while WorkMode is tailored to the needs of lifelong procrastinators.
I'm going to mention some disadvantages of using Focusmate, but please note that I'm not criticizing their platform. If Focusmate works for you - great, continue using it. However, if you find yourself scrolling through Hacker News instead of working, you might want to consider using WorkMode.
So, here are some differences between WorkMode and Focusmate:
On Focusmate, you're connected to another procrastinator who is there for their own benefit and usually doesn't care about your productivity. On WorkMode, it's different—you work with our employee whose only job is to ensure that you're working on your tasks.
On WorkMode, there's no such thing as a "no-show." I'm a hedonistic procrastinator, and even one missed session can result in skipping work for multiple days. This can be catastrophic if you charge by the hour (e.g., many freelancers and consultants). When I started body doubling, my income rose by 2-3x, and my disposable income increased even more.
We call you every day 30 minutes before your scheduled session to remind you about the meeting. We can also email or text you, but in our experience, these methods are less effective. It's sometimes convenient to "miss" an email or a text, but it's not easy to ignore a phone call from your Productivity Partner.
On WorkMode, you always work with the same Productivity Partner. People with social anxiety appreciate this consistency. It also helps build a relationship with them. They get to know you, understand when, why, and how you procrastinate, recognize potential triggers, and help you avoid them.
On Focusmate, you're responsible for showing up and scheduling your sessions. For many, this becomes just another to-do item on their list—one they may fail to complete because, well, they procrastinate. On WorkMode, you just need to connect with us the first time, and we'll take that responsibility from you. We'll ensure that another session is scheduled, remind you about it, and call you before it starts.
Screen sharing is much safer on WorkMode. Just make sure you don't break any NDAs.
The bigger question is whether the company that legally owns the rights to the code realizes it's their IP, and then, of course, whether they care. Not every game publisher has a Nintendo-level concern for their archaic titles, and some publishing houses may technically own hundreds of franchises they have zero intent to ever touch again, and may not even have bothered to keep track of which are theirs.
Geez, these companies just keep getting passed around and around. I was working for Time Warner Interactive (aka Tengen, the consumer arm of Atari Games) when Midway bought them in '96.
It is a cool trick, and even without virtual memory, if the maximum read size has a low upper bound, just copying the first few bytes from the beginning of the buffer to the space past its end can be worth it to avoid the complexity and overhead of indexing into the ring buffer modulo its size.
This clever ring buffer has hidden costs that don't show up in microbenchmarks that only test the push() and pop() operations on an already initialized ring buffer. These costs can be very high and can impact other software on the system as well.
The initialisation requires multiple system calls to set up the aliased memory mapping, and the buffer size has to be a multiple of the page size. This can be very expensive on big systems, e.g., shooting down the TLBs on all other CPUs with inter-processor interrupts to invalidate any potentially conflicting mappings. On the other end, small systems often use variations of virtually indexed data caches and may be forced to mark the aliased ring buffer pages as uncacheable unless you get the cache coloring right. And even for very small ring buffers you use at least two TLB entries. A "dumber" ring buffer implementation can share part of a large page with other data, massively reducing the TLB pressure.
Abusing the MMU to implement an otherwise missing ring addressing mode is tempting and can be worth it for some use-cases on several common platforms, but be careful and measure the impact on the whole system.
It sprang from old mainframe implementations - there's a 2002 (??) Virtual Ring Buffer in Phil Howard's LibH collection (2007 snapshots of 2002 copyright code based on circa 1980 implementations IIRC):
VRB:- These functions implement a basic virtual ring buffer system which allows the caller to put data in, and take data out, of a ring buffer, and always access the data directly in the buffer contiguously, without the caller or the functions doing any data copying to accomplish it.
If anyone is considering using this: I think the code has a couple of concurrency bugs, though I do not know if it was ever intended to be used in a multithreaded setting. vrb_get() contains the following code:
//-- Limit request to available data.
if ( arg_size > vrb_data_len( arg_vrb ) ) {
arg_size = vrb_data_len( arg_vrb );
}
//-- If nothing to get, then just return now.
if ( arg_size == 0 ) return 0;
//-- Copy data to caller space.
memcpy( arg_data, arg_vrb->first_ptr, arg_size );
If there is less data available than requested at the time of the check, but additional data gets added to the buffer before arg_size is assigned from the second vrb_data_len() call, then this can copy more data than the caller requested and overflow the target buffer. At least vrb_read() and vrb_write() have the same bug.
Sadly I can't ask the author anymore. I suspect this was a minimal, stripped-down port of a more battle-tested cross-platform library that dates back to before the era of modern many-core chips.
Concurrency issues existed in the days of yore .. but they arose in different ways with different timings.
It's an odd bit of code archeology: recalling a concept (mmap ring buffers) from decades past and then hunting to find the best remaining example. Much of LibH was 'clean' rewrites of code from the author's past work.
Are the memory semantics programmer-friendly on all (or most) platforms when using aliasing mappings like this? E.g., if two CPUs concurrently update the memory through different virtual address aliases, are the observed semantics identical to accessing it through only one address, or not?
I think POSIX requires it to work. On all sane architectures that use physically indexed caches (or VIPT) that's easy. On other architectures the OS has to bend over backward to preserve the fiction.
VIPT caches only allow this if all virtual aliases of the backing physical memory map to the same cache set. Let's look at an example: writing to the first cache line through both aliases of such a double-mapped ring buffer. The data cache indexes by virtual address while the MMU translates it. If the virtual addresses have the correct cache coloring, they always hit the same cache set when the physical address is compared against the tags, and everything works. But if the virtual addresses don't index to the same set of cache lines, you get data corruption, because there will be two dirty cache lines with conflicting data for the same physical address waiting to be written back in an unknowable order. Good luck debugging this, should your system allow you to create such a mapping. At least Linux and FreeBSD used to let you set up this time bomb on ARMv5, and as the kernel you have only two bad options to choose from: break software relying on this, or make it unbearably slow by marking aliased memory uncacheable.
Nobody (not even IBM AFAIK) uses coloring anymore so this is moot. ARMv5 isn't even supported by Linux anymore.
All modern caches (PIPT, VIPT, or even VIVT) must work with this scheme, as it's semantically transparent to virtual memory. The performance of line-crossing accesses is a totally different issue, and I would never use this "trick".
Incidentally, the non-portable remap_file_pages(2) was great for doing this sort of page manipulation, but it has been deprecated, and these days it is equivalent to doing the separate mmaps.
I guess that would be the case if the battery were 100% lithium, but they are not (and cannot be). If I remember correctly, the issue with increasing density in lithium batteries is their propensity to catch fire and explode (due to dendrites forming between the lithium plates).
So a better battery technology might use a material that stores fewer electrons per m³ of "active" material, but whose cells can contain 30% more of it.
Low language complexity - not much more than EDN and everything is an expression, plus a few extras like destructuring. Certainly simpler than any other language I have used.
Extensibility - again, one of Clojure's strong points with macros.
As for "type systems removing the value of a REPL", again, I disagree. REPL-driven development is as much about exploring the problem as it is about writing code.
Stability in this context means "can make changes with confidence in the absence of tests". The more confident you can be, the more stable. Maybe not the right word though.
> REPL-driven development is as much about exploring the problem as it is about writing code.
Which is exactly what types give you. This even coined the term "type driven development" (a deliberate echo of "test driven development").
> In my experience, 9 times out of 10 delegating sections of content with a final edit and sign off by a designated individual would work very well - no real time collaboration needed.
So that's actually how we do our internal weekly presentation. Each team has a section, and we all work on it. (Albeit in realtime)
> no one is really sure when a slide is finished
You can assign slides in Pitch and update their status, so everyone knows who is responsible for what and what state it is in.
Even outside of the IT sector, I think the majority of the communication in Berlin is done in English. But in IT, English is definitely the lingua franca worldwide.
If you’re experiencing an issue, please get in touch with us through the messenger or via email at support@pitch.com and let us know about the bug. Helpful information to include:
What steps did you take before you experienced the issue?
What happened, and what did you expect to happen instead?
If there is an error message, what does it say?
If you’re using our web app, which browser and which version are you using?
If you’re using our desktop app, what is your operating system?
Screenshots and screen recordings are helpful for us to understand an issue.