AI Model
tasksource/deberta-small-long-nli
Package Version Scores
Overall: 7/10
Security: 8
Activity: 7
Popularity: 7
Quality: 8
Quality
Pull Requests from Bots
Pull requests from bot accounts indicate that the project is using automation for development tasks.
Model Metadata
Model Size:
141,897,219 (~142M parameters)
Tensor Type:
F32
Author:
tasksource
Last Modified:
Created:
Requires Authentication:
false
Gated:
false
Downloads:
20562
Library:
transformers
Likes:
47
Spaces Using Model:
Baraaqasem/tasksource-deberta-small-long-nli
Tags:
transformers, pytorch, safetensors, deberta-v2, text-classification, deberta-v3-small, deberta-v3, deberta, nli, natural-language-inference, multitask, multi-task, pipeline, extreme-multi-task, extreme-mtl, tasksource, zero-shot, rlhf, zero-shot-classification, en, dataset:nyu-mll/glue, dataset:aps/super_glue, dataset:facebook/anli, dataset:tasksource/babi_nli, dataset:sick, dataset:snli, dataset:scitail, dataset:hans, dataset:alisawuffles/WANLI, dataset:tasksource/recast, dataset:sileod/probability_words_nli, dataset:joey234/nan-nli, dataset:pietrolesci/nli_fever, dataset:pietrolesci/breaking_nli, dataset:pietrolesci/conj_nli, dataset:pietrolesci/fracas, dataset:pietrolesci/dialogue_nli, dataset:pietrolesci/mpe, dataset:pietrolesci/dnc, dataset:pietrolesci/recast_white, dataset:pietrolesci/joci, dataset:pietrolesci/robust_nli, dataset:pietrolesci/robust_nli_is_sd, dataset:pietrolesci/robust_nli_li_ts, dataset:pietrolesci/gen_debiased_nli, dataset:pietrolesci/add_one_rte, dataset:tasksource/imppres, dataset:hlgd, dataset:paws, dataset:medical_questions_pairs, dataset:Anthropic/model-written-evals, dataset:truthful_qa, dataset:nightingal3/fig-qa, dataset:tasksource/bigbench, dataset:blimp, dataset:cos_e, dataset:cosmos_qa, dataset:dream, dataset:openbookqa, dataset:qasc, dataset:quartz, dataset:quail, dataset:head_qa, dataset:sciq, dataset:social_i_qa, dataset:wiki_hop, dataset:wiqa, dataset:piqa, dataset:hellaswag, dataset:pkavumba/balanced-copa, dataset:12ml/e-CARE, dataset:art, dataset:winogrande, dataset:codah, dataset:ai2_arc, dataset:definite_pronoun_resolution, dataset:swag, dataset:math_qa, dataset:metaeval/utilitarianism, dataset:mteb/amazon_counterfactual, dataset:SetFit/insincere-questions, dataset:SetFit/toxic_conversations, dataset:turingbench/TuringBench, dataset:trec, dataset:tals/vitaminc, dataset:hope_edi, dataset:strombergnlp/rumoureval_2019, dataset:ethos, dataset:tweet_eval, dataset:discovery, dataset:pragmeval, dataset:silicone, dataset:lex_glue, dataset:papluca/language-identification, dataset:imdb, dataset:rotten_tomatoes, dataset:ag_news, dataset:yelp_review_full, dataset:financial_phrasebank, dataset:poem_sentiment, dataset:dbpedia_14, dataset:amazon_polarity, dataset:app_reviews, dataset:hate_speech18, dataset:sms_spam, dataset:humicroedit, dataset:snips_built_in_intents, dataset:hate_speech_offensive, dataset:yahoo_answers_topics, dataset:pacovaldez/stackoverflow-questions, dataset:zapsdcn/hyperpartisan_news, dataset:zapsdcn/sciie, dataset:zapsdcn/citation_intent, dataset:go_emotions, dataset:allenai/scicite, dataset:liar, dataset:relbert/lexical_relation_classification, dataset:tasksource/linguisticprobing, dataset:tasksource/crowdflower, dataset:metaeval/ethics, dataset:emo, dataset:google_wellformed_query, dataset:tweets_hate_speech_detection, dataset:has_part, dataset:blog_authorship_corpus, dataset:launch/open_question_type, dataset:health_fact, dataset:commonsense_qa, dataset:mc_taco, dataset:ade_corpus_v2, dataset:prajjwal1/discosense, dataset:circa, dataset:PiC/phrase_similarity, dataset:copenlu/scientific-exaggeration-detection, dataset:quarel, dataset:mwong/fever-evidence-related, dataset:numer_sense, dataset:dynabench/dynasent, dataset:raquiba/Sarcasm_News_Headline, dataset:sem_eval_2010_task_8, dataset:demo-org/auditor_review, dataset:medmcqa, dataset:RuyuanWan/Dynasent_Disagreement, dataset:RuyuanWan/Politeness_Disagreement, dataset:RuyuanWan/SBIC_Disagreement, dataset:RuyuanWan/SChem_Disagreement, dataset:RuyuanWan/Dilemmas_Disagreement, 
dataset:lucasmccabe/logiqa, dataset:wiki_qa, dataset:tasksource/cycic_classification, dataset:tasksource/cycic_multiplechoice, dataset:tasksource/sts-companion, dataset:tasksource/commonsense_qa_2.0, dataset:tasksource/lingnli, dataset:tasksource/monotonicity-entailment, dataset:tasksource/arct, dataset:tasksource/scinli, dataset:tasksource/naturallogic, dataset:onestop_qa, dataset:demelin/moral_stories, dataset:corypaik/prost, dataset:aps/dynahate, dataset:metaeval/syntactic-augmentation-nli, dataset:tasksource/autotnli, dataset:lasha-nlp/CONDAQA, dataset:openai/webgpt_comparisons, dataset:Dahoas/synthetic-instruct-gptj-pairwise, dataset:metaeval/scruples, dataset:metaeval/wouldyourather, dataset:metaeval/defeasible-nli, dataset:tasksource/help-nli, dataset:metaeval/nli-veridicality-transitivity, dataset:tasksource/lonli, dataset:tasksource/dadc-limit-nli, dataset:ColumbiaNLP/FLUTE, dataset:tasksource/strategy-qa, dataset:openai/summarize_from_feedback, dataset:tasksource/folio, dataset:yale-nlp/FOLIO, dataset:tasksource/tomi-nli, dataset:tasksource/avicenna, dataset:stanfordnlp/SHP, dataset:GBaker/MedQA-USMLE-4-options-hf, dataset:sileod/wikimedqa, dataset:declare-lab/cicero, dataset:amydeng2000/CREAK, dataset:tasksource/mutual, dataset:inverse-scaling/NeQA, dataset:inverse-scaling/quote-repetition, dataset:inverse-scaling/redefine-math, dataset:tasksource/puzzte, dataset:tasksource/implicatures, dataset:race, dataset:tasksource/race-c, dataset:tasksource/spartqa-yn, dataset:tasksource/spartqa-mchoice, dataset:tasksource/temporal-nli, dataset:riddle_sense, dataset:tasksource/clcd-english, dataset:maximedb/twentyquestions, dataset:metaeval/reclor, dataset:tasksource/counterfactually-augmented-imdb, dataset:tasksource/counterfactually-augmented-snli, dataset:metaeval/cnli, dataset:tasksource/boolq-natural-perturbations, dataset:metaeval/acceptability-prediction, dataset:metaeval/equate, dataset:tasksource/ScienceQA_text_only, dataset:Jiangjie/ekar_english, dataset:tasksource/implicit-hate-stg1, dataset:metaeval/chaos-mnli-ambiguity, dataset:IlyaGusev/headline_cause, dataset:tasksource/logiqa-2.0-nli, dataset:tasksource/oasst2_dense_flat, dataset:sileod/mindgames, dataset:metaeval/ambient, dataset:metaeval/path-naturalness-prediction, dataset:civil_comments, dataset:AndyChiang/cloth, dataset:AndyChiang/dgen, dataset:tasksource/I2D2, dataset:webis/args_me, dataset:webis/Touche23-ValueEval, dataset:tasksource/starcon, dataset:PolyAI/banking77, dataset:tasksource/ConTRoL-nli, dataset:tasksource/tracie, dataset:tasksource/sherliic, dataset:tasksource/sen-making, dataset:tasksource/winowhy, dataset:tasksource/robustLR, dataset:CLUTRR/v1, dataset:tasksource/logical-fallacy, dataset:tasksource/parade, dataset:tasksource/cladder, dataset:tasksource/subjectivity, dataset:tasksource/MOH, dataset:tasksource/VUAC, dataset:tasksource/TroFi, dataset:sharc_modified, dataset:tasksource/conceptrules_v2, dataset:metaeval/disrpt, dataset:tasksource/zero-shot-label-nli, dataset:tasksource/com2sense, dataset:tasksource/scone, dataset:tasksource/winodict, dataset:tasksource/fool-me-twice, dataset:tasksource/monli, dataset:tasksource/corr2cause, dataset:lighteval/lsat_qa, dataset:tasksource/apt, dataset:zeroshot/twitter-financial-news-sentiment, dataset:tasksource/icl-symbol-tuning-instruct, dataset:tasksource/SpaceNLI, dataset:sihaochen/propsegment, dataset:HannahRoseKirk/HatemojiBuild, dataset:tasksource/regset, dataset:tasksource/esci, dataset:lmsys/chatbot_arena_conversations, 
dataset:neurae/dnd_style_intents, dataset:hitachi-nlp/FLD.v2, dataset:tasksource/SDOH-NLI, dataset:allenai/scifact_entailment, dataset:tasksource/feasibilityQA, dataset:tasksource/simple_pair, dataset:tasksource/AdjectiveScaleProbe-nli, dataset:tasksource/resnli, dataset:tasksource/SpaRTUN, dataset:tasksource/ReSQ, dataset:tasksource/semantic_fragments_nli, dataset:MoritzLaurer/dataset_train_nli, dataset:tasksource/stepgame, dataset:tasksource/nlgraph, dataset:tasksource/oasst2_pairwise_rlhf_reward, dataset:tasksource/hh-rlhf, dataset:tasksource/ruletaker, dataset:qbao775/PARARULE-Plus, dataset:tasksource/proofwriter, dataset:tasksource/logical-entailment, dataset:tasksource/nope, dataset:tasksource/LogicNLI, dataset:kiddothe2b/contract-nli, dataset:AshtonIsNotHere/nli4ct_semeval2024, dataset:tasksource/lsat-ar, dataset:tasksource/lsat-rc, dataset:AshtonIsNotHere/biosift-nli, dataset:tasksource/brainteasers, dataset:Anthropic/persuasion, dataset:erbacher/AmbigNQ-clarifying-question, dataset:tasksource/SIGA-nli, dataset:unigram/FOL-nli, dataset:tasksource/goal-step-wikihow, dataset:GGLab/PARADISE, dataset:tasksource/doc-nli, dataset:tasksource/mctest-nli, dataset:tasksource/patent-phrase-similarity, dataset:tasksource/natural-language-satisfiability, dataset:tasksource/idioms-nli, dataset:tasksource/lifecycle-entailment, dataset:nvidia/HelpSteer, dataset:nvidia/HelpSteer2, dataset:sadat2307/MSciNLI, dataset:pushpdeep/UltraFeedback-paired, dataset:tasksource/AES2-essay-scoring, dataset:tasksource/english-grading, dataset:tasksource/wice, dataset:Dzeniks/hover, dataset:sileod/missing-item-prediction, dataset:tasksource/tasksource_dpo_pairs, arxiv:2301.05948, base_model:microsoft/deberta-v3-small, base_model:finetune:microsoft/deberta-v3-small, license:apache-2.0, endpoints_compatible, region:us
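The library and pipeline tags above indicate that the model can be loaded through the transformers zero-shot-classification pipeline. The snippet below is a minimal usage sketch; the input sentence and candidate labels are illustrative examples, not taken from the model card.

```python
# Minimal sketch: zero-shot classification with tasksource/deberta-small-long-nli.
# The example sentence and candidate labels are illustrative only.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="tasksource/deberta-small-long-nli",
)

result = classifier(
    "The new update drains my phone's battery within a few hours.",
    candidate_labels=["battery", "screen", "performance", "price"],
)

# The pipeline ranks candidate labels by entailment probability.
print(result["labels"][0], result["scores"][0])
```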
Basic Information
Release Date:
License:
apache-2.0
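The download, like, gating, and tag fields listed on this page mirror what the Hugging Face Hub API exposes for the model. Below is a minimal sketch of retrieving them programmatically with the huggingface_hub client; the attribute names assume a recent huggingface_hub release.

```python
# Minimal sketch: fetch the metadata shown above via the Hugging Face Hub API.
# Assumes a recent huggingface_hub release (snake_case ModelInfo attributes).
from huggingface_hub import model_info

info = model_info("tasksource/deberta-small-long-nli")

print("Downloads:", info.downloads)
print("Likes:", info.likes)
print("Gated:", info.gated)
print("Library:", info.library_name)
print("Last modified:", info.last_modified)
print("Tags:", info.tags[:5], "...")  # license and dataset entries appear among the tags
```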