Posts must be directly related to LLaMA or the topic of LLMs. Avoid questions that are very simple and can be answered with basic research, like "How do I install this?" or "Where can I find models?"

If you're receiving errors when running something, the first place to search is the issues page for the repository. The problem you're having may already have a documented fix. llama.cpp is here and text generation web UI is here.

This is an open community that highly encourages collaborative resource sharing, but the sub is not here merely as a source for free advertisement. The 1/10th rule is a good guideline: self-promotion should not be more than 10% of your content here. Additionally, if you are sharing your or someone else's project, please do not use any sensationalized titles, and do not use any affiliate links when linking to content.

Posters and commenters are expected to act in good faith. Treat other users the way you want to be treated. Avoid straw-manning and bad-faith interpretations. Avoid presenting misinformation as factual. Please remember to follow Reddit's Content Policy.

Update: This guide continues to be up to date, but a new Simplified Install Guide has been created, including text generation web UI one-click installers, llama.cpp, and usage with AMD.

FAQ

Q: What is LLaMA?

A: LLaMA (Large Language Model Meta AI) is a foundational large language model designed primarily for researchers.

Q: Is LLaMA finetuned for question answering?

A: No. LLaMA models are not finetuned for question answering. Like other large language models, LLaMA works by taking a sequence of words as input and predicting the next word, recursively generating text.
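The recursive next-word generation described in the FAQ can be sketched in a few lines. This is only an illustration of the generation loop, not of LLaMA itself: the bigram lookup table below is a made-up stand-in for the real transformer model, which would assign probabilities over a large vocabulary instead.

```python
# Toy stand-in for a language model: maps the last word to a "most likely"
# next word. A real model like LLaMA predicts a probability distribution
# over its whole vocabulary given the entire context.
BIGRAMS = {
    "the": "cat",
    "cat": "sat",
    "sat": "on",
    "on": "the",
}

def predict_next_word(context):
    """Return the predicted next word for the context (toy model)."""
    return BIGRAMS.get(context[-1], "<eos>")

def generate(prompt, max_new_words=5):
    """Recursively extend the prompt one word at a time,
    feeding each prediction back in as new context."""
    words = prompt.split()
    for _ in range(max_new_words):
        nxt = predict_next_word(words)
        if nxt == "<eos>":  # stop when the model has nothing to add
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # → the cat sat on the cat
```

The loop structure is the same in real systems: the model is called once per generated token, and its output is appended to the input for the next call. This is why generation cost grows with output length.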