|
|
|
|
|
|
|
|
Or maybe we have never needed an exclusive economic monopoly on a creative work to encourage the creation of art? Maybe we would all be better off if art and culture lived in the collective commons, free for anyone in the zeitgeist to adapt and proliferate. Can we really say that commercial production of culture has truly been the best thing for society?
|
|
Not to be too negative, but this is just a rich person's affectation. You don't see these in low-income neighborhoods for a reason. They can only exist where there isn't a real need, because if they were meant to fulfill a need, they would be wholly inadequate. It's like all those ventilator projects people were building during COVID: feel-good busywork, most of which went straight into the trash.
|
1) People who don't derive pleasure from reading do generally see it as an affectation, because they can't understand the appeal outside of apparent status-signaling. Reading, and discovering new material to read, is in fact an intensely selfish pleasure, but it is not a natural one; if you didn't get hooked in childhood, it's probably not possible to convince you, and so you will forever see someone offering you a free book as some kind of hostile, elitist signal.

2) There are a lot of reasons why LFLs may not always thrive in low-income neighborhoods (although there are quite a few in mine, and it's no Beverly Hills), but I would suggest it is not because the residents exist in some higher state of authenticity.
|
Ha, nice try at throwing shade. I read voraciously, and have done so since I could pick up my first book. I don't think there's any more or less dignity in being poor, but neither am I impressed by these weird things wealthy people do to appear socially responsible. It's not solidarity, it's not really mutual aid; it's just performative. It's charity, but only a tiny amount, and only in the places where it's not needed. I guarantee you, if someone from my neighborhood walked through a neighborhood with these things and tried to pick up a book, you'd have Aurora PD with a boot on their neck before they made it back home. Honestly, it's not that I think these are bad; it's that the attention to them is gauche. It's weird to draw attention to a thing that I've literally only seen privileged people put up in front of their homes.
|
Ah, the lovely downvotes of fragile people who can't stand someone not liking their thing.
|
|
op here. Important point, but I disagree. We see explainability/interpretability as a CORE need for AI safety. We believe you can't align/audit/debug/fix a system that you don't understand. To give a couple of examples of what we can do:

1) We can find the training data that is causing a model to output toxic/unwanted text and correct it.

2) We know what high-level concepts the model is relying on for any group of tokens it generates, so reducing that generation is as simple as toggling the effect of that concept on the output. Most AI safety techniques fall under fine-tuning; our model lets you do this without fine-tuning, by toggling the presence of a concept directly.

For example, wouldn't you like to know why a model is being sycophantic? Or sandbagging? Is it a particular kind of training data that is causing this? Or is it some high-level part of the model's representations? For any of this, our model can tell you exactly why the model generated that output. Over the coming weeks, we'll show exactly how you can do this!
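To make the toggling idea concrete, here is a minimal sketch of one way it could work, assuming a PyTorch model and a known concept direction in the residual stream (the function and names here are illustrative, not our actual API):

    import torch

    def toggle_concept(layer, concept_dir, strength=0.0):
        # concept_dir: vector of shape (d_model,) for the concept to toggle.
        # strength=0.0 removes the concept's contribution; 1.0 leaves it unchanged.
        concept_dir = concept_dir / concept_dir.norm()

        def hook(module, inputs, output):
            hidden = output[0] if isinstance(output, tuple) else output
            # Project each token's hidden state onto the concept direction...
            coeff = (hidden @ concept_dir).unsqueeze(-1)
            # ...and scale that component up or down.
            hidden = hidden - (1.0 - strength) * coeff * concept_dir
            return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

        return layer.register_forward_hook(hook)

Calling .remove() on the returned handle restores the model's original behavior, so the toggle is fully reversible.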
|
This is fantastic to read. LLMs feel like black boxes, and for the large ones especially I have a sense that they genuinely form concepts, yet the internals have been opaque. I remember reading that LLMs cannot explain their own behaviour when asked. I feel this would give insight into all of that, including the degree of true conceptualisation. I'm curious whether this can also demonstrate what else the model is aware of when answering.
|
Our decomposition allows us to answer questions like: for 84 percent of the model's representation, we know it is relying on this concept to give an answer. We can also trace the model's behavior back to the training data that led to it, which can show us where some of these concepts come from.
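For intuition, a toy version of that kind of accounting could look like this, assuming a dictionary of concept vectors and a simple least-squares decomposition (a stand-in for illustration, not our exact method):

    import torch

    def concept_shares(hidden, concepts):
        # hidden:   (d_model,) residual-stream activation for a token
        # concepts: (n_concepts, d_model) dictionary of concept directions
        # Solve hidden ~= concepts.T @ coeffs by least squares.
        coeffs = torch.linalg.lstsq(concepts.T, hidden.unsqueeze(-1)).solution.squeeze(-1)
        # Each concept's reconstructed component and its share of the total energy.
        parts = coeffs.unsqueeze(-1) * concepts
        energy = parts.norm(dim=-1) ** 2
        return energy / energy.sum()  # e.g. a 0.84 entry -> "84 percent"

A dominant entry in the output is the kind of evidence behind a statement like "the model is relying on this concept for 84 percent of its representation."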
|
> wouldn't you like to know why a model is being sycophantic? Or sandbagging?

Actually, emphatically no. The only thing I care about is that I have recourse. The reason shouldn't matter; in fact, explainability can be an impediment to accountability. It's just another plausible barrier to a remedy that a bureaucracy can use to deny changing a decision.
|
|
I work on ML problems in the healthcare/life sciences area, and anything that enhances explainability is helpful. To a regulator, it's not good enough to point at a black box and say you don't know why it gave the wrong answer this time. They have an odd acceptance of human error, but very little tolerance for technological uncertainty.
|
|
|
|
|
|