U-DepPLLaMA: u-depp-llama-2-7b (PEFT adapter)

Introduction

The paper explores the capabilities of Large Language Models (LLMs) such as LLaMA in syntactic parsing tasks. We introduce U-DepPLLaMA, a novel architecture that treats dependency parsing as a sequence-to-sequence problem, achieving state-of-the-art results in 26 languages from the Universal Dependencies treebanks. Our approach demonstrates that LLMs can handle dependency parsing without specialized architectures, showing robust performance even on complex sentence structures. The paper is published in IJCoL (see the Citation section below).
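As an illustration of the sequence-to-sequence framing, a dependency tree can be flattened into a plain target string that the model learns to generate. The exact linearization used by U-DepPLLaMA is defined in the paper; the format below is only a hypothetical sketch (token id, form, head id, relation):

Input:  She reads books .
Output: 1 She 2 nsubj | 2 reads 0 root | 3 books 2 obj | 4 . 2 punct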

For more details, please consult the associated GitHub repository.

This model comes in two sizes, built on LLaMA-2 7B and 13B; this repository hosts the 7B adapter.

How to use it

import torch
from transformers import BitsAndBytesConfig, AutoModelForCausalLM, AutoTokenizer, GenerationConfig
from peft import PeftModel

# 4-bit NF4 quantization with double quantization to reduce GPU memory usage
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Load the base model in 4-bit, then apply the U-DepPLLaMA LoRA adapter on top.
# Note: load_in_4bit is already set inside quant_config, so it is not passed
# again here, and torch_dtype matches the bfloat16 compute dtype above.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map={"": 0},
)
model = PeftModel.from_pretrained(
    model,
    "sag-uniroma2/u-depp-llama-2-7b",
)
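If enough GPU memory is available (roughly 14 GB for fp16), the adapter can also be applied to an unquantized base model. A minimal sketch, assuming you skip the BitsAndBytesConfig step above:

# Full-precision alternative: no quantization config, fp16 weights on GPU 0
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.float16,
    device_map={"": 0},
)
model = PeftModel.from_pretrained(model, "sag-uniroma2/u-depp-llama-2-7b")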

# Deterministic beam search (no sampling) for reproducible parses
generation_config = GenerationConfig(
    num_beams=4,
    do_sample=False,
    early_stopping=True,
)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf", trust_remote_code=True)

input_string = "He was most widely recognized for some of his books."
prompt = f"""
### Input:
{input_string}
### Answer:"""

# The LLaMA tokenizer has no pad token, so padding is skipped for a single prompt
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
input_ids = inputs["input_ids"].to(model.device)

with torch.no_grad():
    gen_outputs = model.generate(
        input_ids=input_ids,
        generation_config=generation_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=1024,
        use_cache=True,
    )
s = gen_outputs.sequences[0]
output = tokenizer.decode(s, skip_special_tokens=True)

response = output.split("### Answer:")[1].strip()
print(response)
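The steps above can be wrapped in a small helper to parse several sentences in a row. This is a minimal sketch reusing the model, tokenizer, and generation_config defined above; the parse_sentence helper is our own naming, not part of the released code:

def parse_sentence(sentence: str) -> str:
    # Build the same prompt template used above
    prompt = f"""
### Input:
{sentence}
### Answer:"""
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Without return_dict_in_generate, generate() returns the token ids directly
        output_ids = model.generate(
            input_ids=inputs["input_ids"].to(model.device),
            generation_config=generation_config,
            max_new_tokens=1024,
            use_cache=True,
        )
    output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return output.split("### Answer:")[1].strip()

for sentence in [
    "He was most widely recognized for some of his books.",
    "The quick brown fox jumps over the lazy dog.",
]:
    print(parse_sentence(sentence))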

Citation

@article{hromei2024udeppllama,
  author  = "Hromei, Claudiu Daniel and Croce, Danilo and Basili, Roberto",
  title   = "U-DepPLLaMA: Universal Dependency Parsing via Auto-regressive Large Language Models",
  journal = "IJCoL",
  year    = 2024,
  volume  = "10",
  number  = "1",
  pages   = "21--38"
}
