The bert-base-NER model is a fine-tuned version of BERT, developed by dslim. It is specifically trained for Named Entity Recognition (NER), enabling it to identify and classify entities such as locations, organizations, persons, and miscellaneous entities within text. It was fine-tuned on the English version of the CoNLL-2003 Named Entity Recognition dataset.
The Named Entity Recognition (NER) API endpoint allows you to recognize and extract text entities such as locations, persons, organizations, etc.
Named Entity Recognition is an information extraction task. NER is part of Natural Language Processing (NLP) and involves identifying and categorizing key information (named entities) in text into predefined classes, such as person names, organizations, and locations. For example, in the sentence "John lived in Amsterdam", "John" is a person and "Amsterdam" is a location.
AI Endpoints makes it easy, with ready-to-use inference APIs. Discover how to use them:
This NER API is based on an Open-Source model: dslim/bert-base-NER. It takes text as input and returns the recognized entities.
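For illustration, the response is a JSON list with one object per recognized entity. Based on the handling code below, each object contains at least the matched "word"; the label and score fields sketched here are assumptions drawn from the underlying model's typical output and may differ in practice:
[
  {"entity_group": "PER", "score": 0.99, "word": "John"},
  {"entity_group": "LOC", "score": 0.99, "word": "Amsterdam"},
  {"entity_group": "LOC", "score": 0.99, "word": "Paris"}
]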
The NER endpoint offers you an optimized way to classify the tokens in your text. Learn how to use it with the following example:
First install the requests library:
pip install requests
Next, export your access token to the OVH_AI_ENDPOINTS_ACCESS_TOKEN environment variable:
export OVH_AI_ENDPOINTS_ACCESS_TOKEN=<your-access-token>
If you do not have an access token yet, follow the instructions in the AI Endpoints – Getting Started guide.
Finally, run the following Python code:
import os
import requests

url = "https://bert-base-ner.endpoints.kepler.ai.cloud.ovh.net/api/text2ner"

text = 'John lived in Amsterdam in 1996, but now he is in Paris'
res_dict = {
    "text": text,
}

headers = {
    "Content-Type": "text/plain",
    "Authorization": f"Bearer {os.getenv('OVH_AI_ENDPOINTS_ACCESS_TOKEN')}",
}

response = requests.post(url, data=text, headers=headers)

if response.status_code == 200:
    # Parse the JSON response
    response_data = response.json()
    # Compute the character offsets of each entity in the original text
    latest_end = 0
    for entity in response_data:
        entity["start"] = latest_end + res_dict["text"][latest_end:].index(entity["word"])
        entity["end"] = entity["start"] + len(entity["word"])
        latest_end = entity["end"]
    res_dict["entities"] = response_data
    print(res_dict)
else:
    print("Error:", response.status_code, response.text)
When using AI Endpoints, rate limits apply to each endpoint. If you exceed these limits, a 429 error code will be returned. If you require higher usage, please get in touch with us to discuss increasing your rate limits.
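As a minimal sketch of how a client might handle the 429 status, the following retries the request with exponential backoff. The attempt count and delays are arbitrary choices for illustration, not values prescribed by AI Endpoints:
import time

import requests

def post_with_retry(url, text, headers, max_attempts=5):
    # Retry on HTTP 429 with exponential backoff (1s, 2s, 4s, ...)
    for attempt in range(max_attempts):
        response = requests.post(url, data=text, headers=headers)
        if response.status_code != 429:
            return response
        time.sleep(2 ** attempt)
    return response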
For a broader overview of AI Endpoints, explore the full AI Endpoints Documentation.
Reach out to our support team or join the OVHcloud Discord #ai-endpoints channel to share your questions, feedback, and suggestions for improving the service with the team and the community.
To go further, explore these related tutorials:
- Getting started with AI Endpoints: everything you need to get an access token, call AI models, and integrate AI APIs into your apps with ease.
- AI Endpoints capabilities: a breakdown of current features, future roadmap items, and the platform's core capabilities, so you know exactly what to expect.
- Troubleshooting AI Endpoints: solve common problems, from error codes to unexpected responses, with quick answers, clear fixes, and helpful tips to keep your projects running smoothly.
- Structured Output with OVHcloud AI Endpoints.
- Function Calling with OVHcloud AI Endpoints.
- OVHcloud AI Endpoints Virtual Models.
- Sentiment analysis in Java: build a Quarkus-based sentiment analyzer with the GoEmotions model to unlock emotional insights from text.
- Structured Output with Java, LangChain4j and OVHcloud AI Endpoints.
- Function Calling with Java, LangChain4j and OVHcloud AI Endpoints.
- Model Context Protocol (MCP) with Java, LangChain4j and OVHcloud AI Endpoints.