License: apache-2.0

wav2vec2-large-xlsr-300-arabic

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4514
  • Wer: 0.4256
  • Cer: 0.1528

Evaluation Commands

  1. To evaluate on mozilla-foundation/common_voice_7_0 with the test split:
python eval.py --model_id kingabzpro/wav2vec2-large-xlsr-300-arabic --dataset mozilla-foundation/common_voice_7_0 --config ar --split test
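
The eval.py script itself is not reproduced on this page. As a rough sketch of what such an evaluation involves (an assumption, not the actual script; text normalization steps are omitted), the loop below streams the Arabic test split, transcribes each clip with the LM-backed processor, and scores WER and CER with the datasets metrics:

# Hedged sketch of a minimal evaluation loop; not the actual eval.py.
import torch
import torchaudio.functional as F
from datasets import load_dataset, load_metric
from transformers import AutoModelForCTC, AutoProcessor

model_id = "kingabzpro/wav2vec2-large-xlsr-300-arabic"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

ds = load_dataset("mozilla-foundation/common_voice_7_0", "ar", split="test", use_auth_token=True)
wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

predictions, references = [], []
for sample in ds:
    # Common Voice audio is 48 kHz; the model expects 16 kHz input.
    audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
    inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predictions.append(processor.batch_decode(logits.numpy()).text[0])
    references.append(sample["sentence"])

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))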

Inference With LM

import torch
import torchaudio.functional as F
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor

model_id = "kingabzpro/wav2vec2-large-xlsr-300-arabic"

# Stream a single test sample from Common Voice 8.0 (Arabic); requires a Hugging Face auth token.
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "ar", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)

# Common Voice audio is 48 kHz; the model expects 16 kHz input.
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()

model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits

# The processor's LM-boosted decoder converts the logits into text.
transcription = processor.batch_decode(logits.numpy()).text
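
For comparison, inference without the language model can be done with plain greedy CTC decoding. The snippet below is a hedged sketch that continues from the code above (it reuses model and resampled_audio) and swaps in the tokenizer-only Wav2Vec2Processor; it is an assumed alternative, not part of the original card:

# Hedged sketch: greedy CTC decoding without the LM.
from transformers import Wav2Vec2Processor  # feature extractor + tokenizer, no LM decoder

processor_no_lm = Wav2Vec2Processor.from_pretrained(model_id)
input_values = processor_no_lm(resampled_audio, sampling_rate=16_000, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(processor_no_lm.batch_decode(predicted_ids)[0])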

Training hyperparameters

The following hyperparameters were used during training; a sketch mapping them onto TrainingArguments follows the list:

  • learning_rate: 0.0003
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 10
  • mixed_precision_training: Native AMP
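
Expressed as a transformers TrainingArguments configuration, these settings look roughly as follows. This is a hedged sketch: the output_dir is illustrative, fp16=True stands in for "Native AMP", and the Adam betas/epsilon listed above are simply the Trainer defaults.

from transformers import TrainingArguments

# Sketch of the hyperparameters above; not the author's exact training script.
training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-300-arabic",  # illustrative path
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 64 * 2 = 128 effective train batch size
    warmup_steps=1000,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    seed=42,
    fp16=True,                       # Native AMP mixed-precision training
)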

Training results

Framework versions

  • Transformers 4.17.0.dev0
  • PyTorch 1.10.2+cu102
  • Datasets 1.18.2.dev0
  • Tokenizers 0.11.0
