AIFactory



AIFactory

2024.04.29 05:34

AI Factory's AI chatbot 'WERT' signs subscription service contract with Daewon Pharmaceutical
AI Factory has successfully concluded a contract to supply WERT on a subscription basis to Daewon Pharmaceutical. WERT is a subscription-based AI chatbot service that can be customized to the desired form and workflow, and it comes in three types: knowledge-based, task-integrated, and report-generating.

This contract is AI Factory's first supply deal in the pharmaceutical sector. Daewon Pharmaceutical decided to adopt the service to improve work efficiency by using WERT's question-answering feature to respond quickly to frequently asked internal questions. The ability to accurately locate and answer questions based on the company's internal documents and materials played a key role in the decision. Beyond this contract, we expect gradual market expansion into industries with large volumes of complex internal documentation, such as finance and healthcare.

Solution inquiries: cs@aifactory.page

Newsroom:
- [AI Factory signs 'subscription AI chatbot service' contract with Daewon Pharmaceutical] - 벤쳐스퀘어
- [AI Factory signs first AI chatbot service 'WERT' supply contract in the pharmaceutical industry] - 헬로티
- [AI Factory's AI chatbot 'WERT' signs subscription service contract with Daewon Pharmaceutical] - 한국투데이

Related articles:
- [AI Factory launches AI chatbot service 'WERT'] - 케이벤치
- [AI Factory launches AI chatbot service 'WERT' in three types] - CIO Korea
- [AI Factory officially launches subscription AI chatbot service 'WERT'] - 벤처스퀘어
- [AI Factory launches subscription AI chatbot service 'WERT'] - ITWorld
- [AI Factory launches AI chatbot service 'WERT', three types] - 인사이드 비나
- [AI Factory launches AI chatbot service 'WERT'] - 한국투데이
- [AI Factory launches AI chatbot service 'WERT'] - 일간투데이
- [AI Factory launches AI chatbot service 'WERT'] - 미디어원
- [AI Factory launches AI chatbot service 'WERT'] - 보드나라


BlessingDev

2024.04.23 00:25

Dataset for Rewriting Prompt
Since this competition doesn't provide any available dataset, participants should generate their own data to fine-tune the model. Fortunately, a few forerunners have generated and shared some. Here is the list.

- LLM Prompt Recovery - Synthetic Datastore
  Link: LLM Prompt Recovery - Synthetic Datastore (kaggle.com)
  A dataset generated by Gemma 7B-it, inspired by thedrcat's dataset LLM Prompt Recovery Data.
- 3000 Rewritten texts - Prompt recovery Challenge
  Link: 3000 Rewritten texts - Prompt recovery Challenge (kaggle.com)
  Prompts created by ChatGPT-4; texts rewritten by Gemma 7B-it.
- gemma-rewrite-nbroad
  Link: gemma-rewrite-nbroad (kaggle.com)
  Prompts generated by ChatGPT; essays generated by Gemma 7B-it.


augi_kky

2024.04.23 00:23

What is mean prompting?
Mean prompting is a technique used in natural language processing (NLP) and machine learning, particularly in the context of language generation models. It involves providing a model with a prompt or input that represents the desired output's mean or average characteristics. Here's how mean prompting typically works:

Definition: Mean prompting involves constructing a prompt that encapsulates the average or typical features of the desired output. This prompt serves as guidance for the model to generate outputs that align with the specified characteristics.

Application: Mean prompting is commonly used in text generation tasks such as generating product descriptions, summaries, or responses in conversational AI systems. For instance, in a summarization task, the mean prompt might include key points or representative phrases extracted from the input text, guiding the model to produce a concise summary that captures the essence of the original content.

Implementation: Implementing mean prompting involves designing prompts that strike a balance between specificity and generality. The prompt should provide enough information to guide the model while allowing flexibility for diverse outputs. Techniques such as keyword extraction, sentence compression, or clustering can be employed to distill the input information into a representative prompt. Additionally, fine-tuning or adjusting model parameters may be necessary to ensure that the generated outputs align with the intended characteristics.

Benefits: Mean prompting can improve the coherence, relevance, and consistency of generated outputs by providing the model with clear guidance. It can help mitigate issues such as output drift or divergence commonly observed in open-ended language generation tasks. By focusing the model's attention on specific features or attributes, mean prompting can enhance the overall quality of generated content.

Challenges: Designing effective mean prompts requires domain knowledge and an understanding of the desired output characteristics. Balancing specificity and generality in prompt design can be difficult: overly specific prompts may restrict creativity, while overly general prompts may result in vague or irrelevant outputs. Evaluating the effectiveness of mean prompting techniques often involves subjective judgment and may require human annotation or feedback.

In summary, mean prompting is a valuable technique in language generation tasks, enabling models to produce outputs that exhibit desired average characteristics. By providing clear guidance to the model, mean prompting enhances the quality and relevance of generated content across various NLP applications.
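To make the implementation idea concrete, here is a minimal, self-contained sketch. It uses crude frequency-based keyword extraction as a stand-in for the TF-IDF, sentence-compression, or clustering techniques mentioned above; the function names (`extract_keywords`, `build_mean_prompt`), the stopword list, and the summarization framing are all hypothetical, not from any particular library.

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "are",
             "for", "on", "that", "with", "from", "into", "can", "it"}

def extract_keywords(text: str, k: int = 5) -> list[str]:
    """Crude frequency-based keyword extraction -- a stand-in for
    TF-IDF, sentence compression, or clustering."""
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(k)]

def build_mean_prompt(document: str, k: int = 5) -> str:
    """Build a prompt encoding the 'average' characteristics of the
    desired output via representative keywords."""
    keywords = extract_keywords(document, k)
    return ("Summarize the following text in 2-3 sentences. "
            f"Make sure the summary covers: {', '.join(keywords)}.\n\n"
            f"Text: {document}")

doc = ("Solar panels convert sunlight into electricity. Solar energy is "
       "renewable and panels require little maintenance. Electricity from "
       "solar panels can power homes and feed surplus electricity back "
       "into the grid.")
prompt = build_mean_prompt(doc, k=3)
print(prompt)
```

The keyword list acts as the "mean" of the desired summary: it tells the model which attributes every acceptable output should cover, while leaving the wording free.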


augi_kky

2024.04.23 00:21

fine-tuning? RAG?
Fine-tuning: Fine-tuning adapts a pre-trained model to a specific task by continuing training on additional task-related data, starting from the existing pre-trained weights. It is typically employed when a modest labeled dataset is available and improved task performance is sought. Fine-tuning often involves adjusting hyperparameters such as the learning rate and optimization algorithm during the fine-tuning phase.

Retrieval-Augmented Generation (RAG): RAG, introduced by researchers at Facebook AI (Lewis et al., 2020), is a method particularly suited to knowledge-intensive generation tasks such as question answering and conversational systems. Instead of baking new knowledge into the model's weights, RAG pairs a generative model with a retriever: at inference time, relevant documents are fetched from an external corpus (often a vector database) and supplied to the model as context. Because the knowledge lives outside the model, it can be updated by simply refreshing the corpus, with no retraining required. RAG is particularly effective when a large or frequently changing document collection is available, though retrieval adds latency and its quality bounds the quality of the final answer.

The choice between these methods depends on factors such as dataset size, how often the underlying knowledge changes, task complexity, and available computational resources. Fine-tuning suits stable, specialized tasks with labeled data, while RAG shines when responses must be grounded in large or frequently updated document collections.


발가락

2024.04.23 00:18

In-context learning vs. fine-tuning
In-Context Learning vs. Fine-tuning:

In-Context Learning (Prompt Learning): Utilizes context within prompts to guide model responses without updating the model itself. This method leverages examples within the prompt to shape the output, enhancing flexibility and applicability across various tasks without the need for task-specific data tuning.

Fine-tuning: Involves updating the model with a specific dataset to produce desired outputs, making it effective for specialized tasks but less flexible for changing contexts. It requires substantial time and resources for data collection and labeling, optimizing the model for particular tasks at the expense of general applicability.
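The in-context learning side of this comparison can be sketched concretely. The helper below is hypothetical (not from any library): it simply assembles labeled demonstrations into a single prompt for a frozen model, so the "learning" happens entirely in the context window, with no weight updates.

```python
def few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble labeled demonstrations into one prompt; the model's
    weights are never touched -- the 'learning' is purely in-context."""
    blocks = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("The plot was gripping from start to finish.", "positive"),
    ("I walked out halfway through.", "negative"),
]
prompt = few_shot_prompt(examples, "A beautiful film with a moving score.")
print(prompt)
```

Swapping the examples list changes the task instantly, which is exactly the flexibility the fine-tuning route gives up.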


발가락

2024.04.23 00:17

The limitation of large language models (LLM)
LLMs such as GPT analyze extensive text data to predict and generate text based on statistical patterns. Despite their vast knowledge base, they struggle with contextually appropriate information retrieval. For example, given an anachronistic prompt like "King Sejong using a MacBook," LLMs might generate a historically inaccurate response due to their reliance on statistical probabilities. This phenomenon, known as hallucination, highlights a fundamental issue with GPT-based LLMs, with ongoing mitigation efforts involving fine-tuning and in-context learning.     


kiiae

2024.04.22 18:49

Technique for Enhanced Language Model Performance
While large language models exhibit impressive zero-shot capabilities, they often struggle with more complex tasks without additional guidance. Few-shot prompting emerges as a solution, enabling in-context learning by providing demonstrations or exemplars in the prompt to steer the model towards better performance. This article explores the concept of few-shot prompting, its effectiveness, and its limitations through practical examples and insights from recent research.

Few-shot prompting leverages demonstrations or exemplars within prompts to guide language models towards desired responses. Few-shot properties first appeared when models were scaled to a sufficient size (Kaplan et al., 2020), and they remain central to more recent models such as those of Touvron et al. (2023). Tips from Min et al. (2022) emphasize the importance of both the label space and the input text distribution in demonstrations, along with the format used in prompts.

Demonstrations in prompts can significantly influence model performance, even when labels are randomly assigned: despite randomized labels, models can still produce accurate responses, indicating the effectiveness of few-shot prompting techniques. However, for more complex tasks such as reasoning problems, standard few-shot prompting may fall short of providing reliable responses, and adding more examples to the prompt does not always help. Chain-of-thought (CoT) prompting has gained popularity for addressing complex reasoning tasks by breaking problems down into sequential steps.

Few-shot prompting serves as a valuable technique for enhancing language model performance, particularly for tasks where additional context or guidance is beneficial. However, its effectiveness varies with the complexity of the task and the adequacy of the provided demonstrations. Understanding the limitations of few-shot prompting can inform the exploration of more advanced prompting techniques, such as chain-of-thought prompting, to tackle increasingly complex tasks.
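A chain-of-thought prompt can be sketched as follows. This is a minimal illustration: `cot_prompt` is a hypothetical helper, and the worked demonstration is the well-known tennis-ball example from the chain-of-thought literature; the point is that the demonstration spells out intermediate steps, not just the answer.

```python
def cot_prompt(question: str) -> str:
    """One-shot chain-of-thought prompt: the demonstration spells out
    intermediate reasoning steps before stating the final answer."""
    demonstration = (
        "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls "
        "each. How many tennis balls does he have now?\n"
        "A: Roger started with 5 balls. 2 cans of 3 balls each is 6 "
        "balls. 5 + 6 = 11. The answer is 11."
    )
    return f"{demonstration}\n\nQ: {question}\nA:"

prompt = cot_prompt("A baker had 23 muffins and sold 7. How many are left?")
print(prompt)
```

Compared with plain few-shot prompting, the only change is that each exemplar's answer includes its reasoning chain, which the model then tends to imitate for the new question.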


kiiae

2024.04.22 18:37

Enhancing Language Models with RAG
Retrieval-Augmented Generation (RAG) is an approach that enhances the capabilities of large language models (LLMs) by integrating them with external data sources. By leveraging vector databases, RAG enables LLMs to generate contextually rich responses, addressing limitations related to real-time information access and response accuracy.

RAG operates through a streamlined four-step process:
1. Loading a vector database with encoded documents.
2. Encoding queries into vectors using sentence transformers.
3. Retrieving relevant context from the vector database based on the query.
4. Prompting the LLM with the context and query to generate an informed response.

Building a RAG system involves:
- Creating a vector database using tools like FAISS.
- Integrating LLMs into the RAG framework.
- Designing prompt templates to structure input for the LLM.
- Constructing chains to facilitate data flow between the vector database, retriever, and LLM.

RAG empowers LLMs to deliver more accurate and contextually relevant responses by incorporating external data sources. By harnessing these capabilities, LLMs become versatile tools for a wide range of applications, from providing personalized assistance to facilitating natural language interactions.
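The four steps above can be sketched end to end. This toy version substitutes a bag-of-words counter and cosine similarity for a real sentence transformer and FAISS index; all function names and the sample documents are illustrative only, not an actual RAG framework.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'encoder' standing in for a sentence transformer."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Step 1: load the "vector database" with encoded documents.
docs = [
    "WERT is a subscription AI chatbot service with three product types.",
    "FAISS is a library for efficient similarity search over dense vectors.",
    "LoRA fine-tunes large models by training low-rank adapter matrices.",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Steps 2-3: encode the query, rank documents by similarity."""
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def rag_prompt(query: str) -> str:
    """Step 4: prompt the LLM with retrieved context plus the query."""
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(rag_prompt("Which library does similarity search over dense vectors?"))
```

In a production chain the `embed` and `retrieve` pieces would be a sentence-transformer model and a FAISS (or similar) index, but the data flow between database, retriever, and LLM prompt is the same.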


whalee

2024.04.21 02:03

Choosing Between LoRA/QLoRA for Fine-Tuning LLM
When exploring how to fine-tune large language models (LLMs), I came across two prevalent frameworks: "LoRA" and "QLoRA". Both seem to be at the forefront of current methodologies, compelling me to delve deeper into their functionalities and implications for my project. As a newcomer to the realm of LLMs, the journey to understand these frameworks has been anything but straightforward.

LoRA, introduced in a study available at https://arxiv.org/abs/2106.09685, and QLoRA, detailed at https://arxiv.org/abs/2305.14314, each propose unique approaches to model fine-tuning. In an effort to discern which framework might better serve my needs, I referred to a guidance page provided by Google Cloud, which discusses the trade-offs between LoRA and QLoRA. This resource, accessible at https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/lora-qlora?hl=ko, offers insights into how each framework performs across various metrics.

According to the guidance, the choice between LoRA and QLoRA hinges on specific needs, summarized as follows:
- GPU memory efficiency: QLoRA is recommended for better utilization of GPU memory.
- Speed: LoRA offers superior speed during the training process.
- Cost efficiency: LoRA is more cost-effective, likely due to its speed advantage.
- Higher max sequence length: QLoRA supports longer sequence lengths, beneficial for tasks requiring extensive context.
- Accuracy improvement: Both frameworks offer similar improvements in accuracy.
- Higher batch size: QLoRA accommodates larger batch sizes, which can enhance training efficiency.

Additionally, the guidance notes a practical consideration: the 7B model variant (specifically openLLaMA-7b, not Gemma-7b) with a batch size of 1 fails to train on L4/V100 GPUs, whereas the A100 GPU supports a batch size of 2.

Ultimately, the choice between LoRA and QLoRA should align with one's specific project requirements. For those seeking further insights, the following resources provide implementation examples and additional context. Thanks to:
- An article detailing how to fine-tune Gemma using QLoRA: https://medium.com/@samvardhan777/fine-tune-gemma-using-qlora-️-6b2f2e76dc55
- An exploration of the differences between QLoRA and LoRA for fine-tuning LLMs: https://medium.com/@sujathamudadla1213/difference-between-qlora-and-lora-for-fine-tuning-llms-0ea35a195535
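To make the memory/speed trade-off above concrete, here is a dependency-free sketch of the core idea from the LoRA paper: the frozen weight W is augmented by a trained low-rank product B @ A scaled by alpha / r, so only r * (d_in + d_out) parameters are trained instead of d_in * d_out (QLoRA applies the same scheme on top of a 4-bit-quantized base model, which is where its extra memory savings come from). The matrix sizes and values here are toy examples, not a real training setup.

```python
def matmul(X, Y):
    """Multiply small matrices represented as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def lora_update(W, A, B, alpha: float, r: int):
    """Effective weight W' = W + (alpha / r) * (B @ A).
    Only A (r x d_in) and B (d_out x r) are trained; W stays frozen."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

d, r = 4, 1
W = [[0.0] * d for _ in range(d)]   # frozen pretrained weight (toy values)
A = [[1.0, 0.0, 0.0, 0.0]]          # r x d, trained
B = [[0.0], [2.0], [0.0], [0.0]]    # d x r, trained
W_prime = lora_update(W, A, B, alpha=2.0, r=r)

full_params = d * d            # what a full fine-tune would update
lora_params = r * d + d * r    # what LoRA actually trains
print(W_prime[1], full_params, lora_params)
```

Even in this 4x4 toy, LoRA trains 8 parameters instead of 16; at d in the thousands the ratio is what makes single-GPU fine-tuning of 7B models feasible at all.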


roselyn

2024.04.16 05:28

Prompt Rewriting with Reinforcement Learning
PRewrite (Prompt Rewriting with Reinforcement Learning) is an automated method for rewriting an under-optimized prompt into a more effective one. It instantiates the prompt rewriter as an LLM, trained with reinforcement learning to optimize performance on a given downstream task.

The main idea is to train a prompt rewriter that rewrites an initial, under-optimized prompt into a more effective prompt. The prompt rewriter is itself an LLM, trained with RL to optimize for a downstream task. Specifically, given an initial prompt, the rewriter LLM is instructed to generate a rewritten prompt, which in turn is used by the task LLM to generate the final output. Using a reward computed on the final output against the ground-truth output, the rewriter LLM is finetuned with RL.

Main contributions:
- Propose PRewrite, a novel automated prompt engineering approach. It optimizes prompts via rewriting, in an end-to-end manner using reinforcement learning.
- Develop two rewriting strategies, including one that searches for an optimal rewritten prompt among candidates generated by the RL-trained prompt rewriter. This often further improves prompt optimization performance.
- Conduct experiments on diverse benchmark datasets, which testify to the effectiveness of PRewrite and demonstrate its state-of-the-art performance.

Overview:
First, the prompt rewriter takes an initial prompt p and rewrites it into another prompt p†. The initial prompt is usually crafted manually and can be sub-optimal. Observing the remarkable capability of LLMs, we instruct an LLM with a meta prompt m to perform the rewriting. Second, the rewritten prompt p† is then used by the task LLM to generate the task output. The task LLM is assumed to be a black box accessed via API and can be larger than the rewriter LLM. Third, we compute rewards based on the task output in comparison with the ground-truth output and use reinforcement learning (RL) to finetune the rewriter LLM on a training set. As a result, the rewriter LLM and its rewritten prompts are more likely to perform well on the downstream task. Lastly, we use the RL-trained prompt rewriter to rewrite the initial prompt (per Equation 1 of the paper) based on the two rewriting strategies outlined.

Reference: [2401.08189] PRewrite: Prompt Rewriting with Reinforcement Learning (arxiv.org)
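The candidate-search strategy can be sketched with stubs. Note this omits the RL finetuning entirely: `task_llm` and `rewriter_llm` below are hypothetical stand-ins for API calls to the black-box task LLM and the trained rewriter, and the "search" just scores a fixed candidate list by average reward on a training set, mirroring only the paper's selection step.

```python
def task_llm(prompt: str, x: str) -> str:
    """Stub for the black-box task LLM (a real system would call an API):
    it handles the input correctly only when the prompt is specific enough."""
    return x.upper() if "uppercase" in prompt else x

def reward(output: str, target: str) -> float:
    """Exact-match reward of the task output against the ground truth."""
    return 1.0 if output == target else 0.0

def rewriter_llm(initial_prompt: str) -> list[str]:
    """Stub for the (RL-trained) rewriter LLM: proposes candidate rewrites."""
    return [
        initial_prompt,
        initial_prompt + " Respond in uppercase.",
        "Echo the input. " + initial_prompt,
        initial_prompt + " Be brief.",
    ]

def prewrite_search(initial_prompt: str, train_set: list[tuple[str, str]]) -> str:
    """Search strategy: score each candidate rewrite by its average
    reward on the training set and keep the best one."""
    def avg_reward(p: str) -> float:
        return sum(reward(task_llm(p, x), y) for x, y in train_set) / len(train_set)
    return max(rewriter_llm(initial_prompt), key=avg_reward)

train = [("hello", "HELLO"), ("ok", "OK")]
best = prewrite_search("Repeat the user's word.", train)
print(best)
```

In the actual method the rewriter's proposal distribution is itself shaped by RL on this same reward signal; the sketch only shows why searching over several rewrites can beat taking a single one.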