Fix AttributeError In Langchain-Cohere Integration

Hey everyone, I ran into a bit of a snag while working with Langchain and Cohere, and I wanted to share the issue and how it manifests. Specifically, it revolves around an AttributeError when using a custom model with the "Thinking" feature in the Cohere SDK.

The Problem

When I try to invoke the chat model initialized using init_chat_model, I get the following error:

AttributeError: 'ThinkingAssistantMessageResponseContentItem' object has no attribute 'text'

This happens specifically when response.message.content starts with a ThinkingAssistantMessageResponseContentItem. The code expects a text attribute on the first content item, but ThinkingAssistantMessageResponseContentItem doesn't have one, so the access fails.
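Since the exception is raised inside the integration itself, you can't easily inspect the response from the LangChain side. One way to confirm what the content list actually holds is to call the Cohere SDK directly. Here's a sketch that assumes the same placeholder model ID, key, and base URL as the repro code further down, and that the endpoint works with cohere.ClientV2:

import cohere

# Bypass LangChain and look at the raw content items in the response.
co = cohere.ClientV2(api_key="api_key", base_url="custom-model-base-url")
resp = co.chat(
    model="custom-model-id",
    messages=[{"role": "user", "content": "hello"}],
)
for item in resp.message.content:
    print(type(item).__name__)  # e.g. ThinkingAssistantMessageResponseContentItem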

Diving Deep into the Error

The traceback points to line 1171 in langchain_cohere/chat_models.py, within the _generate function of the ChatCohere class:

response.message.content[0].text if response.message.content else ""

This line grabs .text from the first content item without checking its type. With the "Thinking" feature enabled, that first item is typically a ThinkingAssistantMessageResponseContentItem, which has no text attribute. To really understand this, let's break down what's happening step by step.

First, the _generate function is responsible for processing the response received from the Cohere API. This response includes a message object, which in turn contains a list of content items. These content items represent different parts of the model's response, such as regular text, thinking steps, or other structured information. When using a custom model that incorporates the "Thinking" feature, the response can include ThinkingAssistantMessageResponseContentItem instances within the content list. These items are meant to represent the model's internal reasoning or thought process.

The issue arises because the code naively assumes that every item in the content list will have a text attribute. This assumption is valid for TextAssistantMessageResponseContentItem, which indeed contains the actual text of the response. However, ThinkingAssistantMessageResponseContentItem is designed to hold other types of data related to the model's thinking process, and it does not have a text attribute. Consequently, when the code tries to access response.message.content[0].text on a ThinkingAssistantMessageResponseContentItem, it raises an AttributeError.

To further clarify, let's consider a simplified example. Suppose the response.message.content looks like this:

[
    ThinkingAssistantMessageResponseContentItem(thinking="Let's analyze the user's request..."),
    TextAssistantMessageResponseContentItem(text="The answer is 42.")
]

In this case, response.message.content[0] is a ThinkingAssistantMessageResponseContentItem instance. When the code tries to access .text on this object, it fails because the ThinkingAssistantMessageResponseContentItem only has a thinking attribute, not a text attribute. This is the root cause of the AttributeError.
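To make this concrete, here's a tiny runnable sketch that reproduces the same failure with stand-in classes (these are deliberately not the real cohere types, just minimal mimics of their attributes):

from dataclasses import dataclass

@dataclass
class ThinkingItem:  # stand-in for ThinkingAssistantMessageResponseContentItem
    thinking: str

@dataclass
class TextItem:  # stand-in for TextAssistantMessageResponseContentItem
    text: str

content = [
    ThinkingItem(thinking="Let's analyze the user's request..."),
    TextItem(text="The answer is 42."),
]

# Mirrors what _generate does: read .text from the first item.
print(content[0].text)  # AttributeError: 'ThinkingItem' object has no attribute 'text'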

This error highlights a potential incompatibility between the Langchain-Cohere integration and custom models that utilize the "Thinking" feature. The integration needs to be more robust in handling different types of content items in the response, and it should not assume that every item has a text attribute.

Example Code

Here's the code I used to reproduce the issue:

# Placeholder values throughout; substitute your real model ID, key, and endpoint.
from langchain.chat_models import init_chat_model
from langchain_core.messages import HumanMessage

model = init_chat_model(
    "custom-model-id",
    model_provider="cohere",
    cohere_api_key="api_key",
    base_url="custom-model-base-url",
)
response = model.invoke([HumanMessage(content="hello")])  # raises the AttributeError

System Information

Here's my system setup:

System Information
------------------
> OS:  Windows
> OS Version:  10.0.22631
> Python Version:  3.12.11 (main, Jul 23 2025, 00:32:20) [MSC v.1944 64 bit (AMD64)]

Package Information
-------------------
> langchain_core: 1.1.0
> langchain: 1.1.0
> langchain_community: 0.4.1
> langsmith: 0.4.49
> langchain_anthropic: 1.2.0
> langchain_aws: 1.1.0
> langchain_classic: 1.0.0
> langchain_cohere: 0.5.0
> langchain_mcp_adapters: 0.1.14
> langchain_model_profiles: 0.0.4
> langchain_openai: 1.1.0
> langchain_text_splitters: 1.0.0
> langgraph_sdk: 0.2.9

Optional packages not installed
-------------------------------
> langserve

Other Dependencies
------------------
> aiohttp: 3.13.0
> anthropic: 0.75.0
> beautifulsoup4: 4.14.2
> boto3: 1.40.43
> cohere: 5.20.0
> dataclasses-json: 0.6.7
> httpx: 0.28.1
> httpx-sse: 0.4.0
> jsonpatch: 1.33
> langgraph: 1.0.4
> mcp: 1.15.0
> numpy: 2.3.3
> openai: 1.109.1
> opentelemetry-api: 1.37.0
> opentelemetry-exporter-otlp-proto-http: 1.37.0
> opentelemetry-sdk: 1.37.0
> orjson: 3.11.3
> packaging: 25.0
> playwright: 1.55.0
> pydantic: 2.11.9
> pydantic-settings: 2.11.0
> pytest: 8.4.2
> pyyaml: 6.0.3
> PyYAML: 6.0.3
> requests: 2.32.5
> requests-toolbelt: 1.0.0
> rich: 14.2.0
> sqlalchemy: 2.0.44
> SQLAlchemy: 2.0.44
> tenacity: 9.1.2
> tiktoken: 0.11.0
> types-pyyaml: 6.0.12.20250915
> typing-extensions: 4.15.0
> zstandard: 0.25.0

Possible Solutions

To address this issue, a potential solution involves modifying the _generate function to handle ThinkingAssistantMessageResponseContentItem instances gracefully. Instead of directly accessing the text attribute, the code should check the type of the content item and handle it accordingly. For example, it could check if the item is a TextAssistantMessageResponseContentItem before attempting to access the text attribute.

Here's a possible modification to the code:

# The exact import path can vary between cohere SDK versions; in cohere 5.x
# the type is re-exported at the top level of the package.
from cohere import TextAssistantMessageResponseContentItem

content_text = "".join(
    item.text
    for item in (response.message.content or [])
    if isinstance(item, TextAssistantMessageResponseContentItem)
)

This version treats a missing content list as empty, keeps only the items that are instances of TextAssistantMessageResponseContentItem, and joins their text attributes into content_text. Thinking items are simply skipped, so the code never touches a text attribute that doesn't exist, preventing the AttributeError.
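An even more defensive variant avoids importing the cohere types altogether by using getattr with a default; this is a sketch of an alternative, not the official fix:

# Falls back to "" for any item type that lacks a text attribute.
content_text = "".join(
    getattr(item, "text", "") for item in (response.message.content or [])
)

The trade-off is that this silently tolerates any unexpected item type, while the isinstance check documents exactly which type is supposed to carry the text.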

Implementing the Fix

To implement this fix, you would need to modify the _generate function in the langchain_cohere/chat_models.py file. Here are the steps to do this:

  1. Locate the langchain_cohere/chat_models.py file in your installed langchain-cohere package (the snippet after this list shows one way to find it).
  2. Open the file in a text editor.
  3. Find the _generate function within the ChatCohere class.
  4. Replace the line that causes the error with the modified code provided above.
  5. Save the file.
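
For step 1, you can ask Python directly where the installed module lives:

import langchain_cohere.chat_models

# Prints the absolute path of the file you need to edit.
print(langchain_cohere.chat_models.__file__)

Keep in mind that editing an installed package is a stopgap: the change will be overwritten the next time langchain-cohere is upgraded or reinstalled.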

After making these changes, the AttributeError should be resolved, and you should be able to use custom models with the "Thinking" feature without encountering this issue.
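
To verify, rerun the repro from earlier (reusing the model object from that snippet); with the patch in place, the call should return normally:

# Same hypothetical config as the repro code above.
response = model.invoke([HumanMessage(content="hello")])
print(response.content)  # the text portion of the reply; thinking items are skipped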

Conclusion

This AttributeError highlights the importance of handling different types of content items in the Langchain-Cohere integration. By modifying the _generate function to handle ThinkingAssistantMessageResponseContentItem instances gracefully, we can resolve the issue and enable the use of custom models with the "Thinking" feature. I hope this helps anyone else encountering the same problem!

In summary, the key takeaway is to ensure your code anticipates various response types from the Cohere API, especially when custom models introduce features like 'Thinking'. A little defensive programming can save a lot of headaches!