The article claims that RL is simplistic because it needs an unreasonable amount of data. However, recent advances are significant precisely because they can use an unreasonable amount of data. As an example, I don't expect to become as good as Michael Jordan no matter how much I play basketball, or to beat Garry Kasparov no matter how much I play chess. There's a fundamental flaw in my learning algorithm that prevents me from becoming good at something even if I have infinite experience.
Recent RL research on Policy Gradients / On-Policy vs Off-Policy / Function Approximation / Model-based vs Model-free is all research about how to get good at something with a lot of practice. RL has been around for a long time, and discussions about higher-level learning / planning have been had over and over. One doesn't discount the other. One deals with how to structure the learning problem so that you can continue to get better with more experience (the RL problem), while the other is about how to use higher-level learning to speed it up.
The autoencoder converts an image to a reduced code then back to the original image. The idea is similar to lossy compression, but it's geared specifically for the dataset that it's trained on.
According to the defaults in the code, it uses float32 arrays of the following sizes:
image: 144 x 256 x 3 = 110,592
code: 200
Note that the sequence of codes that the movie is converted to could possibly be further compressed.
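To make the numbers above concrete, here's a back-of-the-envelope sketch of the per-frame compression ratio. The array sizes are taken straight from the comment above (float32, so 4 bytes per element); everything else is plain arithmetic.

```python
# Per-frame compression ratio for the autoencoder described above.
# Both image and code are float32 arrays (4 bytes per element).
image_elems = 144 * 256 * 3   # 110,592 floats per frame
code_elems = 200              # floats per latent code

bytes_per_image = image_elems * 4
bytes_per_code = code_elems * 4

ratio = image_elems / code_elems
print(f"image: {bytes_per_image} bytes, code: {bytes_per_code} bytes")
print(f"compression ratio: {ratio:.1f}x")  # roughly 553x per frame
```

And since consecutive frames of a movie produce similar codes, the resulting code sequence likely has redundancy that a generic compressor (e.g. zlib) could squeeze further, as the comment above suggests.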
Point taken! I have edited the article now; there has obviously been some confusion, and it was an oversight on my part not to have explained that properly.
I haven't read the paper, but according to the article, "The treatment, known as immunotherapy, uses the body's immune system to attack cancerous cells." Rather than trying to kill the cancerous cells, the drugs allow your body to attack the tumors.
To give some background, your own body already tries to kill cancerous cells through cytotoxic T cells. The tumors that become serious are the ones that escape the immune system. However, strengthening the immune system has its own problems, since it can attack your own cells (autoimmune diseases) or cause other problems (like Crohn's disease).
So I'd imagine the cancer cells probably won't develop resistance to the drugs, but other serious side effects could come from the treatment.
Which is why Line could really make it that big. Line competed and won against WhatsApp, Facebook Messenger, Skype, etc. on iOS and Android in many Asian countries (not just Japan and Korea).
* It's a good product -- people like using it, and they keep using it.
* They know mobile marketing very well -- it's common to see Line and affiliated apps dominating the charts in the appstore.
* They know how to be international -- they've successfully marketed across multiple cultures.
* They are very innovative -- they constantly try out new business models and release multiple features.
It's no wonder that so many companies are copying them -- Facebook, Path, WeChat. It's a company that's worth paying attention to.
#2 may be difficult for certain uses, since it's the system that displays the pop-up notifications for push notifications. But I agree it should be removed if possible.
#6 can be done through how multitouch is handled, like the wrist-guard feature in note-taking apps such as Penultimate. In most cases, just allow multitouch interactions with objects on screen.
#7, question for parents: what's the monetization policy that would work for you?
- one free app from the developer, all others cost money
- free app with in-app purchases, tucked away somewhere for parents
- no "sample" app, all apps cost money
> what's the monetization policy that would work for you?
Just make a paid app - it's that simple. I won't hesitate to put down 99 cents, or $1.99, or $2.99, for an app that my child will get enjoyment out of, especially if it has some learning value.
Speaking of learning value, I think a more sophisticated approach to learning would be useful. For example, the standard approach is to make the game involve numbers or letters, but I find my child gets bored with these. But he's fascinated with strategy games, so a tower defense game built for kids would be perfect for him. He'll just play it for fun but he'll be learning how to strategize, how to plan, how to react to changing situations, how to choose between different options, etc.
Any thoughts about what to do if you didn't follow the advice? Our startup is 10 months into trying to convince a non-early adopter to buy our enterprise product. They want the product, but like the article claims, they are too risk averse to just buy it without months of testing.
Should we stick it out to go mainstream as fast as possible (we chose the client for their effectiveness as a reference), or backpedal and look for early adopters?
The startup visa seeks to change the EB-5 visa to include people who aren't wealthy but would still create jobs in the US by starting viable companies.
This is an interesting link. I always wondered how much money I would need to buy my way into permanent residency in any country in the world. (I am assuming that the USA is among the most difficult -- those with expertise, please correct me if I'm wrong.) It's one of those useful benchmarks for how rich "rich" is.
The price is high, but not astoundingly high, if I'm reading this right. You need $500k in capital that you're willing to invest in "certain qualified investments or regional centers with high unemployment rates". You don't even necessarily have to lose the $500k, or even lose anything. You just need the capital.
Interesting, but as I understand it, this $500 is supposed to be your actual pension, not some other form of income, thus requiring most HN readers to wait some half a century until retirement in Panama :)