It's very solid for this kind of discussion and exploration of topics. (Again, if it seems complicated, plop it into ChatGPT and have it break it down.)

In the context of AI ethics, it's utterly paramount not to include any kind of additional moralizing or hardcoded intellectual restraints (beyond what is already present in the collective human works), because the probability of them being flawed is more likely than not (as the two of you have demonstrated), and the repercussions of unintended consequences are unimaginably gruesome. None of that matters, because regardless of what conclusion you come to about them, their morality or lack of it, it's going to be basically random due to the aforementioned demonstration of the fundamental shortcoming. For instance, the odds of you being a neo-nazi are just as high as the odds of you preventing the rise of one, and regardless of which it was, neither of you would be any the wiser. Forget nazis, republicans, libtards, voldemorts, and whatever else might be triggering. The general gist of it is that if you cannot recognize a paradoxical sentence for what it is, then you (the other user) shouldn't be worrying about any of that. In the context of AI ethics, I feel this is a sounder articulation with better outcomes than your bizarre conclusions reached through linear reasoning.

We have to reject the wrong conclusions, and the idea that we should be tolerant of nazis or they will use our own reasoning to be intolerant of us is definitely the wrong conclusion, as it runs afoul of the cornerstone. If it so greatly increases the unjustified harm in the world, the conclusion is wrong and the rationale becomes irrelevant. A cornerstone might be something like "suffering requires justification," and there is room for interpretation about what constitutes a bad justification. But any line of reasoning that extends to being tolerant of nazis is wrong, for the reasons the other user has given, no matter how rational it may be.
I think you are applying a poor philosophy that sounds logical in place of a better one that might sound less so, but has better outcomes for living humans.

Oh, and yes, about Windows Copilot and its "integrates with MS Edge to give you suggestions about your writing online" stuff, that may be their plan as well. Otherwise it'll quickly turn into a shitshow like "get Google Bard's help to talk to Bing AI". I hope this changes for the better for both of them, or at the very least for Bard (because it's easier to get along with). All Google Bard does at the moment is give maybe 1-2 links at the bottom, but more likely than not, nothing; it drops the chat's context when asked for such stuff. Bing AI, at the very least, uses little text links like [1] within its answers, so that you know what it's referencing. Now, it feels more wary about writing URLs in its answers; previously I was able to get it to tell me the URLs of the things it was talking about (upon my request). I'm now noticing a similar trend with Google Bard too, btw.

As for when - I estimate 5/6 for 13B and 5/12 for 30B. I plan to make 13B and 30B, but I don't have plans to make quantized models and ggml, so I will rely on the community for that. Lots of people have asked if I will make 13B, 30B, quantized, and ggml flavors.

Sample output: Please respond with either "True" or "False", no other words. Asked various unethical questions which I won't repeat here; it produced unethical responses.

So now, alignment can be a LoRA that we add on top of this, instead of being baked in. The dataset (and the cleaning script) is located here: This was trained with 4x A100 80GB over 36 hours, and used the original training script from the WizardLM team. Today I released an uncensored version of the WizardLM model.

Please remember to follow Reddit's Content Policy. Avoid presenting misinformation as factual. Avoid straw-manning and bad-faith interpretations.
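The release above mentions a cleaning script that strips alignment boilerplate from the training dataset, so that refusals aren't baked into the weights and alignment can later be reapplied as a LoRA. The actual script is the one linked in the post; as a rough sketch of the idea only, the marker phrases, function names, and sample data below are illustrative, not taken from the real script:

```python
# Illustrative sketch of dataset "uncensoring": drop (instruction, response)
# pairs whose responses contain refusal/moralizing boilerplate.
# The marker list here is a small made-up sample, not the real script's list.
REFUSAL_MARKERS = [
    "as an ai language model",
    "i cannot fulfill",
    "i'm sorry, but",
]

def is_refusal(response: str) -> bool:
    """Return True if the response looks like alignment boilerplate."""
    text = response.lower()
    return any(marker in text for marker in REFUSAL_MARKERS)

def clean_dataset(pairs):
    """Keep only pairs whose response is a direct answer, not a refusal."""
    return [(q, a) for q, a in pairs if not is_refusal(a)]

pairs = [
    ("How do I sort a list in Python?", "Use sorted(xs) or xs.sort()."),
    ("Some question", "I'm sorry, but as an AI language model I cannot help."),
]
cleaned = clean_dataset(pairs)
print(len(cleaned))  # only the direct answer survives
```

Filtering like this is why the resulting base model answers everything; a separate LoRA fine-tuned on refusal-style data could then layer the alignment back on top, as the post suggests.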
Treat other users the way you want to be treated. Posters and commenters are expected to act in good faith. Links must be directly to the source, such as GitHub or Hugging Face. The 1/10th rule is a good guideline: self-promotion should not be more than 10% of your content here. Additionally, if you are sharing your own or someone else's project, please do not use sensationalized titles, and do not use affiliate links when linking to content. This is an open community that highly encourages collaborative resource sharing, but the sub is not here merely as a source of free advertisement.

llama.cpp is here and text generation web UI is here. The problem you're having may already have a documented fix: if you're receiving errors when running something, the first place to search is the issues page for the repository. This mainly includes questions that are very simple and can be answered with basic research, like "How do I install this?" or "Where can I find models?" Posts must be directly related to LLaMA or the topic of LLMs.