AI Model
sileod/deberta-v3-base-tasksource-nli
Package Version Scores
Overall: 8/10
Security: 8
Activity: 9
Popularity: 8
Quality: 8
Quality
Pull Requests from Bots: Pull requests from bot accounts indicate that the project is using automation for development tasks.
Model Metadata
Model Size: 184424451 parameters
Tensor Type: F32
Author: sileod
Last Modified:
Created:
Requires Authentication: false
Gated: false
Downloads: 10782
Library: transformers
Likes: 133
Spaces Using Model:
awacke1/sileod-deberta-v3-base-tasksource-nli, ceckenrode/sileod-deberta-v3-base-tasksource-nli, keneonyeachonam/sileod-deberta-v3-base-tasksource-nli-021423, awacke1/sileod-deberta-v3-base-tasksource-nli-2, qtoino/form_matcher, Ryann/sileod-deberta-v3-base-tasksource-nli, furqankassa/sileod-deberta-v3-base-tasksource-nli, JDB62/sileod-deberta-v3-base-tasksource-nli, jbraun19/sileod-deberta-v3-base-tasksource-nli, bertugmirasyedi/aristotle-api, RangiLyu/sileod-deberta-v3-base-tasksource-nli, jakubz86/sileod-deberta-v3-base-tasksource-nli, aisafe/FACTOID, AhmedMagdy7/sileod-deberta-v3-base-tasksource-nli, Akay2024/sileod-deberta-v3-base-tasksource-nli, Towhidul/check, pepsinb/sileod-deberta-v3-base-tasksource-nli, arnabdhar/Zero-Shot-Classification-DeBERTa-Quantized, marklhq/sileod-deberta-v3-base-tasksource-nli, AaA-tips369/sileod-deberta-v3-base-tasksource-nli
Tags:
transformers, pytorch, safetensors, deberta-v2, text-classification, deberta-v3-base, deberta-v3, deberta, nli, natural-language-inference, multitask, multi-task, pipeline, extreme-multi-task, extreme-mtl, tasksource, zero-shot, rlhf, zero-shot-classification, en, dataset:glue, dataset:nyu-mll/multi_nli, dataset:multi_nli, dataset:super_glue, dataset:anli, dataset:tasksource/babi_nli, dataset:sick, dataset:snli, dataset:scitail, dataset:OpenAssistant/oasst1, dataset:universal_dependencies, dataset:hans, dataset:qbao775/PARARULE-Plus, dataset:alisawuffles/WANLI, dataset:metaeval/recast, dataset:sileod/probability_words_nli, dataset:joey234/nan-nli, dataset:pietrolesci/nli_fever, dataset:pietrolesci/breaking_nli, dataset:pietrolesci/conj_nli, dataset:pietrolesci/fracas, dataset:pietrolesci/dialogue_nli, dataset:pietrolesci/mpe, dataset:pietrolesci/dnc, dataset:pietrolesci/gpt3_nli, dataset:pietrolesci/recast_white, dataset:pietrolesci/joci, dataset:martn-nguyen/contrast_nli, dataset:pietrolesci/robust_nli, dataset:pietrolesci/robust_nli_is_sd, dataset:pietrolesci/robust_nli_li_ts, dataset:pietrolesci/gen_debiased_nli, dataset:pietrolesci/add_one_rte, dataset:metaeval/imppres, dataset:pietrolesci/glue_diagnostics, dataset:hlgd, dataset:PolyAI/banking77, dataset:paws, dataset:quora, dataset:medical_questions_pairs, dataset:conll2003, dataset:nlpaueb/finer-139, dataset:Anthropic/hh-rlhf, dataset:Anthropic/model-written-evals, dataset:truthful_qa, dataset:nightingal3/fig-qa, dataset:tasksource/bigbench, dataset:blimp, dataset:cos_e, dataset:cosmos_qa, dataset:dream, dataset:openbookqa, dataset:qasc, dataset:quartz, dataset:quail, dataset:head_qa, dataset:sciq, dataset:social_i_qa, dataset:wiki_hop, dataset:wiqa, dataset:piqa, dataset:hellaswag, dataset:pkavumba/balanced-copa, dataset:12ml/e-CARE, dataset:art, dataset:tasksource/mmlu, dataset:winogrande, dataset:codah, dataset:ai2_arc, dataset:definite_pronoun_resolution, dataset:swag, dataset:math_qa, 
dataset:metaeval/utilitarianism, dataset:mteb/amazon_counterfactual, dataset:SetFit/insincere-questions, dataset:SetFit/toxic_conversations, dataset:turingbench/TuringBench, dataset:trec, dataset:tals/vitaminc, dataset:hope_edi, dataset:strombergnlp/rumoureval_2019, dataset:ethos, dataset:tweet_eval, dataset:discovery, dataset:pragmeval, dataset:silicone, dataset:lex_glue, dataset:papluca/language-identification, dataset:imdb, dataset:rotten_tomatoes, dataset:ag_news, dataset:yelp_review_full, dataset:financial_phrasebank, dataset:poem_sentiment, dataset:dbpedia_14, dataset:amazon_polarity, dataset:app_reviews, dataset:hate_speech18, dataset:sms_spam, dataset:humicroedit, dataset:snips_built_in_intents, dataset:banking77, dataset:hate_speech_offensive, dataset:yahoo_answers_topics, dataset:pacovaldez/stackoverflow-questions, dataset:zapsdcn/hyperpartisan_news, dataset:zapsdcn/sciie, dataset:zapsdcn/citation_intent, dataset:go_emotions, dataset:allenai/scicite, dataset:liar, dataset:relbert/lexical_relation_classification, dataset:metaeval/linguisticprobing, dataset:tasksource/crowdflower, dataset:metaeval/ethics, dataset:emo, dataset:google_wellformed_query, dataset:tweets_hate_speech_detection, dataset:has_part, dataset:wnut_17, dataset:ncbi_disease, dataset:acronym_identification, dataset:jnlpba, dataset:species_800, dataset:SpeedOfMagic/ontonotes_english, dataset:blog_authorship_corpus, dataset:launch/open_question_type, dataset:health_fact, dataset:commonsense_qa, dataset:mc_taco, dataset:ade_corpus_v2, dataset:prajjwal1/discosense, dataset:circa, dataset:PiC/phrase_similarity, dataset:copenlu/scientific-exaggeration-detection, dataset:quarel, dataset:mwong/fever-evidence-related, dataset:numer_sense, dataset:dynabench/dynasent, dataset:raquiba/Sarcasm_News_Headline, dataset:sem_eval_2010_task_8, dataset:demo-org/auditor_review, dataset:medmcqa, dataset:aqua_rat, dataset:RuyuanWan/Dynasent_Disagreement, dataset:RuyuanWan/Politeness_Disagreement, 
dataset:RuyuanWan/SBIC_Disagreement, dataset:RuyuanWan/SChem_Disagreement, dataset:RuyuanWan/Dilemmas_Disagreement, dataset:lucasmccabe/logiqa, dataset:wiki_qa, dataset:metaeval/cycic_classification, dataset:metaeval/cycic_multiplechoice, dataset:metaeval/sts-companion, dataset:metaeval/commonsense_qa_2.0, dataset:metaeval/lingnli, dataset:metaeval/monotonicity-entailment, dataset:metaeval/arct, dataset:metaeval/scinli, dataset:metaeval/naturallogic, dataset:onestop_qa, dataset:demelin/moral_stories, dataset:corypaik/prost, dataset:aps/dynahate, dataset:metaeval/syntactic-augmentation-nli, dataset:metaeval/autotnli, dataset:lasha-nlp/CONDAQA, dataset:openai/webgpt_comparisons, dataset:Dahoas/synthetic-instruct-gptj-pairwise, dataset:metaeval/scruples, dataset:metaeval/wouldyourather, dataset:sileod/attempto-nli, dataset:metaeval/defeasible-nli, dataset:metaeval/help-nli, dataset:metaeval/nli-veridicality-transitivity, dataset:metaeval/natural-language-satisfiability, dataset:metaeval/lonli, dataset:tasksource/dadc-limit-nli, dataset:ColumbiaNLP/FLUTE, dataset:metaeval/strategy-qa, dataset:openai/summarize_from_feedback, dataset:tasksource/folio, dataset:metaeval/tomi-nli, dataset:metaeval/avicenna, dataset:stanfordnlp/SHP, dataset:GBaker/MedQA-USMLE-4-options-hf, dataset:GBaker/MedQA-USMLE-4-options, dataset:sileod/wikimedqa, dataset:declare-lab/cicero, dataset:amydeng2000/CREAK, dataset:metaeval/mutual, dataset:inverse-scaling/NeQA, dataset:inverse-scaling/quote-repetition, dataset:inverse-scaling/redefine-math, dataset:tasksource/puzzte, dataset:metaeval/implicatures, dataset:race, dataset:metaeval/spartqa-yn, dataset:metaeval/spartqa-mchoice, dataset:metaeval/temporal-nli, dataset:metaeval/ScienceQA_text_only, dataset:AndyChiang/cloth, dataset:metaeval/logiqa-2.0-nli, dataset:tasksource/oasst1_dense_flat, dataset:metaeval/boolq-natural-perturbations, dataset:metaeval/path-naturalness-prediction, dataset:riddle_sense, dataset:Jiangjie/ekar_english, 
dataset:metaeval/implicit-hate-stg1, dataset:metaeval/chaos-mnli-ambiguity, dataset:IlyaGusev/headline_cause, dataset:metaeval/race-c, dataset:metaeval/equate, dataset:metaeval/ambient, dataset:AndyChiang/dgen, dataset:metaeval/clcd-english, dataset:civil_comments, dataset:metaeval/acceptability-prediction, dataset:maximedb/twentyquestions, dataset:metaeval/counterfactually-augmented-snli, dataset:tasksource/I2D2, dataset:sileod/mindgames, dataset:metaeval/counterfactually-augmented-imdb, dataset:metaeval/cnli, dataset:metaeval/reclor, dataset:tasksource/oasst1_pairwise_rlhf_reward, dataset:tasksource/zero-shot-label-nli, dataset:webis/args_me, dataset:webis/Touche23-ValueEval, dataset:tasksource/starcon, dataset:tasksource/ruletaker, dataset:lighteval/lsat_qa, dataset:tasksource/ConTRoL-nli, dataset:tasksource/tracie, dataset:tasksource/sherliic, dataset:tasksource/sen-making, dataset:tasksource/winowhy, dataset:mediabiasgroup/mbib-base, dataset:tasksource/robustLR, dataset:CLUTRR/v1, dataset:tasksource/logical-fallacy, dataset:tasksource/parade, dataset:tasksource/cladder, dataset:tasksource/subjectivity, dataset:tasksource/MOH, dataset:tasksource/VUAC, dataset:tasksource/TroFi, dataset:sharc_modified, dataset:tasksource/conceptrules_v2, dataset:tasksource/disrpt, dataset:conll2000, dataset:DFKI-SLT/few-nerd, dataset:tasksource/com2sense, dataset:tasksource/scone, dataset:tasksource/winodict, dataset:tasksource/fool-me-twice, dataset:tasksource/monli, dataset:tasksource/corr2cause, dataset:tasksource/apt, dataset:zeroshot/twitter-financial-news-sentiment, dataset:tasksource/icl-symbol-tuning-instruct, dataset:tasksource/SpaceNLI, dataset:sihaochen/propsegment, dataset:HannahRoseKirk/HatemojiBuild, dataset:tasksource/regset, dataset:lmsys/chatbot_arena_conversations, dataset:tasksource/nlgraph, arxiv:2301.05948, license:apache-2.0, model-index, endpoints_compatible, deploy:azure, region:us
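The tags above list transformers, nli, and zero-shot-classification, so the model can be loaded through the Hugging Face zero-shot pipeline. Below is a minimal sketch under that assumption; the function name, example text, and candidate labels are illustrative, not from this page.

```python
# Minimal sketch: NLI-based zero-shot classification with this model,
# assuming the "transformers" library (listed in Model Metadata) is installed.
# The pipeline downloads the model weights (~184M parameters) on first use.
from transformers import pipeline

MODEL_ID = "sileod/deberta-v3-base-tasksource-nli"

def classify(text, candidate_labels):
    """Score `text` against each candidate label via the zero-shot pipeline."""
    classifier = pipeline("zero-shot-classification", model=MODEL_ID)
    return classifier(text, candidate_labels=candidate_labels)

# Illustrative usage (downloads the model, so not run here):
# classify("The new phone has excellent battery life.",
#          ["technology", "sports", "politics"])
```

The pipeline treats each candidate label as an NLI hypothesis against the input text and ranks labels by entailment score, which is why an NLI-trained checkpoint like this one works for zero-shot classification.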
Basic Information
Release Date:
License: apache-2.0