manthideaal's comments

In the source code, to perform el.classList = v.classes, the author uses:

  for (const name of v.classes)
    if (!el.classList.contains(name)) el.classList.add(name)
  for (const name of el.classList) 
    if (!v.classes.includes(name)) el.classList.remove(name)
Why did the standard decide to make classList read-only?

Could el.className = v.classes.join(" ") be a valid and performant solution? Perhaps classList is read-only to avoid the string-to-token translation for performance reasons? Then why don't they include a classList.set method?


> Why is classList read-only? I can't think of any valid reason.

Because it's a live DOMTokenList, not a computed property. That is, if you keep a reference to a classList and mutate the className property, the classList will reflect the changes.

classList could also be a computed property which, when assigned to, clears the underlying token list and adds all elements but I'm guessing the developers of the API saw this as unnecessary complexity given you can do the same thing by setting className.

The point of classList is the ability to more easily check for and toggle individual classes; if you don't need that capability you just don't use classList.

> Perhaps el.className = v.classes.join(" ") is a valid solution?

Yes.


Great answer, thanks! Embarrassingly, I didn't even know about className :-)


I’m not sure this is the right place, but... I see a potential subtle bug there too: it doesn’t seem at first glance that the classes get added in the same order as they’re specified in the virtual element? If I made div.bar and then changed it to div.foo.bar, properties of .foo could override properties of .bar because it looks like it would end up with classList [“bar”, “foo”] instead of [“foo”, “bar”]? Maybe?

Edit: please don’t take this as criticism. I love the project and think it’s awesome! That’s why I took the time to read through the source and grok it!


> If I made div.bar and then changed it to div.foo.bar, properties of .foo could override properties of .bar because it looks like it would end up with classList [“bar”, “foo”] instead of [“foo”, “bar”]? Maybe?

What properties are you talking about here? Despite the name, classes are really an unordered set; the order of classes on the element should not matter. When CSS gets applied, properties are prioritised based on the most specific rule, then the latest rule. That is, the prioritisation of CSS properties should depend entirely on the CSS and not in any way on the order of the class attribute / className property.


So let's say .foo sets "padding: 8px;" and .bar sets "padding: 4px;". It's been a while, but I'm pretty sure that <div class="foo bar"> ends up with 4px padding, and <div class="bar foo"> ends up with 8px padding. I could be wrong though and will test it out when I'm back in front of a real computer.

Edit: Woah... It looks like I am incorrect. Nifty! I'm amazed I have never been burned by that before!

Edit 2: I suspect the reason I've never been burned by that before is because overrides like that have generally been applied over top of some kind of generic CSS file and the generic one was loaded first; since whichever rule is declared later is considered to have precedence, the override file wins over the generic file. My mind is kind of blown right now that over 20 years of web development I have never encountered this problem!
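A minimal snippet to test it with (the .foo/.bar rules are the hypothetical ones from the example above): since both selectors have equal specificity, the rule declared later in the stylesheet wins on any element carrying both classes, regardless of the order inside the class attribute.

```html
<style>
  .foo { padding: 8px; }
  .bar { padding: 4px; } /* same specificity, declared later: wins */
</style>
<div class="foo bar">4px padding</div>
<div class="bar foo">also 4px padding</div>
```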


Agreed, let me know if you find a better way! I feel if the browser APIs were slightly more declaratively inclined, this article could be "10 line React".


To merge the properties of two objects you could use:

  let merged = {...obj1, ...obj2};

See (1), which has 2808 votes.
(1) https://stackoverflow.com/questions/171251/how-can-i-merge-p...
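As a quick sketch of what the spread merge does (obj1 and obj2 here are made-up placeholders): spread copies enumerable own properties from left to right, so keys from the right-hand object win.

```javascript
const obj1 = { a: 1, b: 2 };
const obj2 = { b: 3, c: 4 };

// Properties are copied left to right, so obj2.b overwrites obj1.b.
const merged = { ...obj1, ...obj2 };
// merged is { a: 1, b: 3, c: 4 }; obj1 and obj2 are left untouched.
```

Note this is a shallow merge: nested objects are shared by reference, not copied.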


I wonder if a two-step process could work better than this: first train a variational autoencoder (or simply an autoencoder), then use it to train on the labeled samples.

In (1) there is a full example of using the two-step strategy, but using more labeled data, to obtain 92% accuracy. Could someone try changing the second part to use only ten labels for the classification part and share the results?

(1) https://www.datacamp.com/community/tutorials/autoencoder-cla...

Edited: I found a deeper analysis in (2); in short, for CIFAR-10 the VAE semi-supervised learning approach gives poor results, but the author did not use augmentation!

(2) http://bjlkeng.github.io/posts/semi-supervised-learning-with...


Yeah, authors have tried mixing the strategy you described (self-supervised learning) into semi-supervised tasks.

The basic idea is to learn generic image representations without manual labeling and then fine-tune on your small dataset. These are relevant articles I have written on it: https://amitness.com/2020/02/illustrated-self-supervised-lea...

https://amitness.com/2020/03/illustrated-simclr/


To understand the terms WebRTC, STUN, TURN, mesh, SFU, MCU, ICE and trickle ICE, there is (1): 15 minutes to understand what all this is about. What about IPv6 STUN and TURN? It seems other people have asked the same question I had (2); one of the answers is: as IPv6 takes over the complexity of new networks, STUN and ICE will become irrelevant. I think that with the surge in video conferencing and RTC, IPv6 will take off.

In my very humble opinion, I would suggest reserving some address space in IPv6 for RTC, so that a peer is able to adopt a new special IP reserved for RTC. Nothing new under the sun: in 2014 someone commented along this line of thought, see (2) and (3).

So what are we waiting for?

(1) https://webrtcglossary.com/ (2) https://www.quora.com/Will-the-IPv6-result-in-the-death-of-S...

(3) 2014, AshleysBrain, https://news.ycombinator.com/item?id=7496986 : "I think the solution is IPv6. Once every device on the Internet is uniquely addressable again, we can do away with these NAT hacks and two endpoints should be able to reliably connect to each other again, no matter where they are. Of course, that's assuming we don't get more short-sighted engineering that breaks things again..."


addressable !== routable !== reachable.

IPv6 will certainly REDUCE the need for STUN, but there are still (many) cases where you don't want to be "reachable by default", in which case you need a stable reference for negotiating routing and reachability (e.g. STUN).


IPv6 could have an address for "reachable by default"; I find that very useful. Also, IPv6 allows many addresses, so that is not wasteful.


Yeah, but if it's reachable by default then it's (by definition) open to the world. Otherwise (if you mean routable by default) you will still end up temporarily punching holes in your firewall, which you will need to close afterwards, and possibly recycling your IP so you aren't still routable on that last-used address. Sounds like you would personally end up being a STUN server!




In order to send a datagram to multiple IPs, the first and naive idea one can think of is to change the datagram protocol to allow for multiple destinations. Today, 2020, is the right time. I am thinking about platforms that have hundreds or thousands of simultaneous receiving ends, so that the branching point occurs near the destination. Again, googling shows this proposal is not new: (1), RFC 1770, category informational.

Edited: It seems that RTCDataChannel can be used with an SFU (for example LiveSwitch in 2018), but they don't use multiple-destination datagrams (3).

More on a similar proposal in (2).

(1) IPv4 Option for Sender Directed Multi-Destination Delivery. The Selective Directed Broadcast Mode (SDBM) is an integral part of the U.S. Army standard for tactical data communication networks as defined in MIL-STD-188-220().

(2) https://www.researchgate.net/publication/238663190_IPv4_Opti...

(3) https://www.frozenmountain.com/developers/blog/archive/indus...




Each receiver sending feedback does not prevent the server from using datagrams with multiple destinations. I can see that each peer uses a different resolution and bitrate, but that is another layer; it is like sending information at several resolutions and each peer selecting the best one.

Edited: I must learn something about multicast in IPv6, the idea seems interesting.


> is like sending information at several resolutions and each peer selecting the best one.

Well, no. This is not about sending all video layers to all receivers and letting them choose which one to render. Not at all.

The purpose of video simulcast/SVC is the opposite: make the SFU decide (based on estimated per receiver bandwidth or whatever) which video layers to deliver to each receiver, so a HQ video of 8 mbps does not break your Internet downlink (you just receive the lowest video layer which is 1 mbps, for example).

And more importantly: no, the server cannot send the same UDP datagram (the same RTP packet) to all receivers. The server needs a different RTP sequence number count and different SRTP encryption keys for each receiver.

I'm afraid this is not as easy as you say, not at all.

BTW: Do you want to say something about mediasoup? or just about your stuff?


Thanks for all the info. I think that mediasoup is a very good SFU. I wish you the best and I hope the mediasoup SFU becomes a crucial tool for RTC.

Edited: In a recent article (1) it seems that multicast is better than an SFU in WebRTC. In the PhD thesis (2) a hybrid model is used: "Hybrid multicast-unicast video streaming over heterogeneous cellular networks".

(1) https://ieeexplore.ieee.org/document/8811590 (2) https://summit.sfu.ca/item/16802


Another link with PDFs of the lectures, exams and assignments, and a link to videos of the lectures, in (1).

(1) https://www.davidsilver.uk/teaching/


It seems that neural networks have problems computing the maximum function (1), while a human can compute the maximum easily, so it seems that the three heuristic rules don't work in this case.

(1) https://datascience.stackexchange.com/questions/56676/can-ma...


You're referring to whether a generic one-size-fits-all model will do well, but ML is full of bespoke models. It would be simple to build a neural network that can compute (and differentiate through) the max function to within some arbitrary epsilon, even though the most generic model (a feed-forward network) won't do great.


> the most generic model (feed forward network) won't do great.

See my answer below: in the case of this problem a generic feed-forward network, even a simple one, will work.

Not any FFN, but assuming you are using an efficient architecture search, it will probably find one that works.

There are other numerical problems where this doesn't hold, but that's another story.


You demonstrated it for a reeeeeeeallly constrained version of the problem. Do you expect your solution would generalize to many lists? Because it would be easy to make a neural network that does, while your toy example (and larger generalizations) probably won't generalize super well.

x_i = ith list element from list x

y = sum(x_i * softmax(k * x)_i)

This one parameter, arbitrarily wide network one will get arbitrarily close to the max function.

This is a super toy version of why attention is so effective. It can pick stuff.
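A plain-JavaScript sketch of that formula (the name softMaxApprox and the sharpness parameter k are mine, not from the thread): as k grows, the softmax weights concentrate on the largest element, so the weighted sum approaches the true max while remaining differentiable in x.

```javascript
// Softmax-weighted sum: y = sum_i x_i * softmax(k * x)_i.
// As k grows, the weight on the largest element approaches 1,
// so the result approaches max(x).
function softMaxApprox(xs, k) {
  const exps = xs.map(x => Math.exp(k * x)); // unnormalised softmax(k * x)
  const z = exps.reduce((a, b) => a + b, 0); // normalising constant
  return xs.reduce((acc, x, i) => acc + x * (exps[i] / z), 0);
}
```

For example, softMaxApprox([0.2, 0.9, 0.5], 50) is already within about 1e-6 of 0.9; unlike a hard max, the expression has useful gradients everywhere, which is the point of the attention analogy.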


That is untrue,

Here's a code example (actually took me ~20 minutes to get it "right" so I'll admit it's not the most trivial problem)... it includes seeds so that you can replicate locally (it should hit 100% accuracy all the time on the 1200 examples testing set reliably by about epoch 700):

  import torch
  import random
  from sklearn.metrics import accuracy_score

  random.seed(61)
  torch.manual_seed(61)

  X = [[random.random() for x in range(2)] for x in range(2000)]
  X_train = torch.FloatTensor(X[0:800]).cuda()
  X_test = torch.FloatTensor(X[800:]).cuda()
  X = torch.FloatTensor(X)

  # One-hot label marking the index of the larger element.
  Y = []
  for x in X:
      y = [0] * len(x)
      y[torch.argmax(x)] = 1
      Y.append(y)

  Y_train = torch.FloatTensor(Y[0:800]).cuda()
  Y_test = torch.FloatTensor(Y[800:]).cuda()

  shape = [2, 2]
  layers = []
  for ind in range(len(shape) - 1):
      layers.append(torch.nn.Linear(shape[ind], shape[ind + 1], bias=False))

  # Sequential takes modules as separate arguments, hence the unpacking.
  net = torch.nn.Sequential(*layers).cuda()

  optim = torch.optim.SGD(net.parameters(), lr=1)
  criterion = torch.nn.CrossEntropyLoss()

  dataset = torch.utils.data.TensorDataset(X_train, Y_train)
  dataloader = torch.utils.data.DataLoader(dataset, shuffle=True, batch_size=10)

  for i in range(pow(10, 6)):
      for X, Y in dataloader:
          Yp = net(X)
          loss = criterion(Yp, Y.max(1).indices)
          loss.backward()
          optim.step()
          optim.zero_grad()

      # Drop the learning rate after the initial large-step phase.
      if i > 500:
          optim = torch.optim.SGD(net.parameters(), lr=0.002)

      if i % 20 == 0:
          Yp = net(X_test)
          print('Training loss: ', loss.item())
          print(f'\nAccuracy score for epoch {i}:')
          print(accuracy_score(Yp.max(1).indices.tolist(), Y_test.max(1).indices.tolist()))

This is as basic as you can get: predict the max out of 2 numbers, using only a total of 4 nodes:

2 inputs (the 2 numbers) -> 2 outputs (the index of the maximum number). Just 4 weights being optimized, no biases, no nothing; as simple an implementation as you can get in terms of size.

There's also a way to do it (apparently) where instead of treating it as "find the max index" you treat it as "output the maximum number": https://www.quora.com/Can-deep-neural-networks-learn-the-min...

But the approach I have will generalize to e.g. "Find the max of 5 or 100 or 1000 numbers" (although I assume it might take some time)

And overall you have no guarantee, that's why I qualified the statement and didn't say "Literally any imaginable problem that a human can solve without context".

To some extent it also matters how you encode your numbers: you can train a 10000000000-parameter FCNN with ReLU activations until the end of time to learn a simple multiplication, and it won't be able to do so if you don't log-encode your numbers or use some encoding or activation that means `*` can be transposed into the `+` operations being done inside the nodes to combine the outputs... because that's outside the scope of mathematics that a given network can do.

But, unless you are specifically trying to come up with an edge case, and are instead looking at real-world problems and trying to design the network in such a way as to best handle them (and this doesn't have to be all manual; you can use various NAS techniques), the rule will hold most of the time, I believe.


You are right. Also, the universal approximation theorem (1) for neural networks guarantees that neural networks can approximate continuous functions on compact subsets of R^n, and max(x,y) is such a continuous function.

argmax([x,y]) = (sign(x[1]-x[0])+1)/2

Going beyond continuous functions, can deep learning be used for primality testing?

(1) https://en.wikipedia.org/wiki/Universal_approximation_theore...
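To make this concrete, a tiny sketch in plain JavaScript (the names are mine): max(x, y) is piecewise linear, so beyond mere approximation it is exactly representable with a single ReLU unit, and the sign formula recovers the argmax of two distinct numbers (note the argument order, chosen so the result is 1 when the second element is larger).

```javascript
const relu = z => Math.max(z, 0);

// max(x, y) = y + relu(x - y): an exact ReLU representation,
// stronger than the approximation the universal approximation
// theorem promises for general continuous functions.
const maxViaRelu = (x, y) => y + relu(x - y);

// argmax over two distinct numbers: 0 when x0 is larger, 1 when x1 is larger.
const argmax2 = (x0, x1) => (Math.sign(x1 - x0) + 1) / 2;
```

For example, maxViaRelu(3, 5) and maxViaRelu(5, 3) both give 5, and argmax2(3, 5) gives 1 while argmax2(5, 3) gives 0.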


https://escholarship.org/content/qt5sg7n4ww/qt5sg7n4ww.pdf

> A long-standing difficulty for connectionism has been to implement compositionality, the idea of building a knowledge representation out of components such that the meaning arises from the meanings of the individual components and how they are combined. Here we show how a neural-learning algorithm, knowledge-based cascade-correlation (KBCC), creates a compositional representation of the prime-number concept and uses this representation to decide whether its input n is a prime number or not. KBCC conformed to a basic prime-number testing algorithm by recruiting source networks representing division by prime numbers in order from smallest to largest prime divisor up to √n. KBCC learned how to test prime numbers faster and generalized better to untrained numbers than did similar knowledge-free neural learners. The results demonstrate that neural networks can learn to perform in a compositional manner and underscore the importance of basing learning on existing knowledge.

But again, I think things such as prime number tests are the exact kind of edge cases where one needs too many heuristics built into the model for it to be practical to use.

But I think something like a prime test is not included under the definition I gave anyway, because the idea of "prime" actually implies a lot of context.

You can take a baby and he will be able to classify images, you can take a human that speaks a language with no concept of numbers and he will be able to play or sing music and distinguish patterns in it.

You can't talk about "prime" without a mathematical apparatus that takes years for humans to understand. However, since we learn it at such an early age, it ends up in the background.

Granted, that could be said about almost any cognitive ability (the fact that there's a lot of "subconscious context" required to use it).... so I don't know.


IANAL, but if the purpose of taking a screenshot every 30 minutes is to monitor the employee's work, you must know that in the EU you have the right to be informed about any measure taken to monitor you.

If you can convince the judge that taking the screenshots has another purpose, then the GDPR doesn't apply.

From (2): The WP29 outlines that a DPIA is likely to be required if «a company systematically monitor(s) its employees' activities, including the monitoring of the employees' work station, internet activity», since it implies «systematic monitoring and data concerning vulnerable data subjects» (23). From "GDPR and Personal Data Protection in the Employment Context", Claudia Ogriseg.

In (1), at point 8: the employer has to inform the employee about (i) whether and when monitoring is applied, (ii) the purpose of data processing, and (iii) the means used for data processing.

(1) https://legalict.com/factsheets/privacy-monitoring-work-gdpr...

Point 2, "What kind of personal data does an employer process?", includes: remote management of all mobile devices, such as phones and laptops.


Using so many microservices seems like a good use case for Erlang; googling, I found (1).

(1) http://blog.plataformatec.com.br/2019/10/kubernetes-and-the-...


A more recent version of the last link in the Statistics and machine learning section is (1)

(1) https://github.com/percyliang/cs229t/blob/master/lectures/no...



Some more info from (1): the new Netwalker phishing campaign is using an attachment named "CORONAVIRUS_COVID-19.vbs". Related, from (2): APT36 uses two lure formats in this campaign: Excel documents with embedded malicious macros, and RTF documents designed to exploit the CVE-2017-0199 Microsoft Office/WordPad remote code execution vulnerability.

From (3): Ransomware Gangs to Stop Attacking Health Orgs During Pandemic.

(1) https://www.bleepingcomputer.com/news/security/netwalker-ran...

(2) https://www.bleepingcomputer.com/news/security/nation-backed...

(3) https://www.bleepingcomputer.com/news/security/ransomware-ga...

