Blank output of deepseek

I am using DeepSeek R1 for a project. The prompt is large, and the model takes around 5-6 minutes to produce a complete response. I found that sometimes it gives me a blank string as output instead of the usual OpenAI-style response object.
The same prompt sometimes returns a blank string and sometimes the correct response.
I am using:

client.chat.completions.create(
  model="deepseek-ai/deepseek-r1",
  messages=[
    {"role": "system", "content": "You are a software developer."},
    {"role": "user", "content": ...},  # large prompt omitted
  ],
  temperature=...,
  top_p=...,
)

Has anybody faced a similar issue?

I want to call the NVIDIA DeepSeek model through the API. How do I set the API URL and model name?

Hi @gefei2 - please see the right-hand panel on this page: deepseek-r1 Model by Deepseek-ai | NVIDIA NIM. It gives the code for calling the API through Python, LangChain, Node, or a shell script.

Python example:

from openai import OpenAI

client = OpenAI(
  base_url = "https://integrate.api.nvidia.com/v1",
  api_key = "$API_KEY_REQUIRED_IF_EXECUTING_OUTSIDE_NGC"
)

completion = client.chat.completions.create(
  model="deepseek-ai/deepseek-r1",
  messages=[{"role":"user","content":"Which number is larger, 9.11 or 9.8?"}],
  temperature=0.6,
  top_p=0.7,
  max_tokens=4096,
  stream=True
)

for chunk in completion:
  if chunk.choices[0].delta.content is not None:
    print(chunk.choices[0].delta.content, end="")
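Relating this back to the blank-output question above: with streaming it can help to accumulate the chunks into one string instead of printing them immediately, so an empty reply is easy to detect and log. A minimal sketch, where the stub objects only imitate the shape of the streamed chunks for demonstration and are not part of the real client:

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Join the delta.content pieces of an OpenAI-style stream into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta is not None:
            parts.append(delta)
    return "".join(parts)

# Stub chunks mimicking the streaming response shape, for illustration only.
def _chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

fake_stream = [_chunk("9.8 is "), _chunk(None), _chunk("larger.")]
text = collect_stream(fake_stream)
if not text.strip():
    print("Warning: model returned an empty response")
else:
    print(text)  # 9.8 is larger.
```

With the real API you would pass the `completion` iterator from the call above instead of `fake_stream`; an empty `text` then tells you the blank output came from the model, not from your parsing.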

Yes, this behavior can indeed happen when working with large models, especially if:

The prompt is too long or the token limit has been exceeded — sometimes an empty response may be returned instead of an error.

The server is overloaded or unstable — the model may simply not have time to form a response and return an empty string.

The temperature and top_p parameters are too strict — if the values are too low, the model may “get stuck” and fail to generate a response.

What you can try:

Make sure that the total size of the prompt plus the expected response does not exceed the model’s context window (check the model card for the exact limit).

Try setting temperature=0.7 and top_p=0.9 temporarily if you are using more extreme values.

Add a simple try/except block to retry the request in case of an empty response.

Check if an empty string or None is returned, and log the entire response to make sure that the issue is not with parsing.

Make sure you are using an up-to-date client version and a stable connection.
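The retry and empty-response checks suggested above can be sketched as a small wrapper that treats an empty or None result as a failure and retries with a backoff. `call_with_retry` and the stubbed responses below are illustrative, not part of any SDK:

```python
import time

def call_with_retry(create_fn, max_retries=3, backoff_s=2.0):
    """Call create_fn(); retry when it returns an empty or None string.

    create_fn is any zero-argument callable returning the model's text,
    e.g. a lambda wrapping client.chat.completions.create(...).
    """
    for attempt in range(1, max_retries + 1):
        text = create_fn()
        if text:  # non-empty, non-None
            return text
        print(f"Attempt {attempt}: empty response, retrying...")
        if attempt < max_retries:
            time.sleep(backoff_s * attempt)  # linear backoff between attempts
    raise RuntimeError(f"Empty response after {max_retries} attempts")

# Stubbed demonstration: the first call returns a blank string, the second succeeds.
_responses = iter(["", "9.8 is larger than 9.11."])
result = call_with_retry(lambda: next(_responses), backoff_s=0)
print(result)  # 9.8 is larger than 9.11.
```

Logging each failed attempt, as done here, also covers the suggestion above to record the raw response so you can confirm the blank output is coming from the API rather than from your own parsing.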

If the behavior occurs specifically with DeepSeek R1, it may be a limitation or a bug in a specific implementation. In that case, it is worth creating an issue on the project’s GitHub; similar cases are already being discussed there.