Xpoz Python SDK

Python SDK for the Xpoz social media intelligence platform. Query Twitter/X, Instagram, and Reddit data through a simple, Pythonic interface.

Installation

pip install xpoz

Requires Python 3.10+.

Get an API Key

Sign up and get your token at https://xpoz.ai/get-token.

Once you have it, pass it directly or set the XPOZ_API_KEY environment variable:

export XPOZ_API_KEY=your-token-here

What is Xpoz?

Xpoz provides unified access to social media data across Twitter/X, Instagram, and Reddit. The platform indexes billions of posts, user profiles, and engagement metrics — making it possible to search, analyze, and export social media data at scale.

The SDK wraps Xpoz's MCP server, abstracting transport, authentication, operation polling, and pagination behind a clean, developer-friendly API.

Features

  • 30 data methods across Twitter, Instagram, and Reddit
  • Sync and async clients — XpozClient and AsyncXpozClient
  • Automatic operation polling — long-running queries are abstracted away
  • Server-side pagination — PaginatedResult with next_page(), get_page(n)
  • CSV export — export_csv() on any paginated result
  • Field selection — request only the fields you need in Pythonic snake_case
  • Pydantic v2 models — fully typed results with autocomplete support
  • Namespaced API — client.twitter.*, client.instagram.*, client.reddit.*

Quick Start

from xpoz import XpozClient

client = XpozClient("your-api-key")

user = client.twitter.get_user("elonmusk")
print(f"{user.name} — {user.followers_count:,} followers")

results = client.twitter.search_posts("artificial intelligence", start_date="2025-01-01")
for tweet in results.data:
    print(tweet.text, tweet.like_count)

client.close()

Authentication

Get your API key at https://xpoz.ai/get-token, then use it as follows:

# Pass API key directly
client = XpozClient("your-api-key")

# Or use XPOZ_API_KEY environment variable
import os
os.environ["XPOZ_API_KEY"] = "your-api-key"
client = XpozClient()

# Custom server URL (also reads XPOZ_SERVER_URL env var)
client = XpozClient("your-api-key", server_url="https://xpoz.ai/mcp")

# Custom operation timeout (default: 300 seconds)
client = XpozClient("your-api-key", timeout=600)

Context Manager

# Sync — auto-closes on exit
with XpozClient("your-api-key") as client:
    user = client.twitter.get_user("elonmusk")

# Async
import asyncio
from xpoz import AsyncXpozClient

async def main():
    async with AsyncXpozClient("your-api-key") as client:
        user = await client.twitter.get_user("elonmusk")
        results = await client.twitter.search_posts("AI")
        page2 = await results.next_page()

asyncio.run(main())

Pagination

Methods that return large datasets use server-side pagination (100 items per page). These return a PaginatedResult[T] with built-in helpers:

results = client.twitter.search_posts("AI")

results.data                       # list[TwitterPost] — current page
results.pagination.total_rows      # total matching rows
results.pagination.total_pages     # total pages
results.pagination.page_number     # current page number
results.pagination.page_size       # items per page (100)
results.pagination.results_count   # items on current page
results.has_next_page()            # bool

# Navigate pages
page2 = results.next_page()        # fetch next page
page5 = results.get_page(5)        # jump to specific page

# Export to CSV
csv_url = results.export_csv()     # returns download URL
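Putting the helpers together, a loop that drains every page might look like this. This is a sketch, not part of the SDK — collect_all_pages is a hypothetical helper that relies only on the PaginatedResult API shown above:

```python
# Hypothetical helper: walk a PaginatedResult from its first page to the
# last, using only .data, .has_next_page(), and .next_page().
def collect_all_pages(first_page):
    items = list(first_page.data)
    page = first_page
    while page.has_next_page():
        page = page.next_page()
        items.extend(page.data)
    return items

# e.g. all_posts = collect_all_pages(client.twitter.search_posts("AI"))
```

For large result sets, prefer export_csv() over paging through everything in memory.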

Field Selection

All methods accept a fields parameter. Use snake_case — the SDK translates to camelCase automatically.

# Only fetch the fields you need (faster + less memory)
results = client.twitter.search_posts(
    "AI",
    fields=["id", "text", "like_count", "retweet_count", "created_at_date"]
)

user = client.twitter.get_user(
    "elonmusk",
    fields=["id", "username", "name", "followers_count", "description"]
)

Requesting fewer fields significantly improves response time.
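The translation itself is mechanical. A sketch of the snake_case-to-camelCase conversion (illustrative only, not the SDK's internal code):

```python
# Illustrative sketch of snake_case → camelCase field-name translation.
def to_camel(name: str) -> str:
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

print(to_camel("like_count"))       # likeCount
print(to_camel("created_at_date"))  # createdAtDate
```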

Query Syntax

The query parameter on all search_* and get_*_by_keywords methods supports a Lucene-style full-text syntax across Twitter, Instagram, and Reddit.

Exact phrase

Wrap in double quotes to require an exact match:

"machine learning"
"climate change"

Keywords (any word)

Space-separated terms without quotes match posts containing any of the words:

AI crypto blockchain

Boolean operators

Use AND, OR, NOT (case-insensitive). A bare space is treated as OR — be explicit:

"deep learning" AND python
tensorflow OR pytorch
climate NOT politics

Grouping with parentheses

(AI OR "artificial intelligence") AND ethics
(startup OR entrepreneur) NOT "venture capital"

Combined example

results = client.twitter.search_posts(
    '("machine learning" OR "deep learning") AND python NOT spam',
    start_date="2025-01-01",
    language="en",
)

Note: Do not use from:, lang:, since:, or until: in the query string — use the dedicated parameters (author_username, language, start_date, end_date) instead.

Error Handling

from xpoz import (
    XpozError,
    AuthenticationError,
    ConnectionError,
    OperationTimeoutError,
    OperationFailedError,
    OperationCancelledError,
    NotFoundError,
    ValidationError,
)

try:
    user = client.twitter.get_user("nonexistent_user_12345")
except OperationFailedError as e:
    print(f"Operation {e.operation_id} failed: {e.error}")
except OperationTimeoutError as e:
    print(f"Timed out after {e.elapsed_seconds}s")
except AuthenticationError:
    print("Invalid API key")
except XpozError as e:
    print(f"Xpoz error: {e}")
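For transient failures such as timeouts or dropped connections, a small retry wrapper can help. This is a sketch, not part of the SDK — with_retries, the attempt count, and the backoff schedule are all illustrative; only the exception classes come from the imports above:

```python
import time

# Hypothetical helper: call fn(), retrying on the given transient
# exception types with a simple linear backoff.
def with_retries(fn, retry_on, attempts=3, backoff=1.0):
    for attempt in range(attempts):
        try:
            return fn()
        except retry_on:
            if attempt == attempts - 1:
                raise  # out of attempts — re-raise the last error
            time.sleep(backoff * (attempt + 1))

# usage, assuming a client and the exceptions imported above:
# user = with_retries(
#     lambda: client.twitter.get_user("elonmusk"),
#     retry_on=(ConnectionError, OperationTimeoutError),
# )
```

AuthenticationError and ValidationError are deliberately not retried — retrying them cannot succeed.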

API Reference

Twitter — client.twitter

get_user(identifier, identifier_type="username", *, fields) -> TwitterUser

Get a single Twitter user profile.

# By username (default)
user = client.twitter.get_user("elonmusk")

# By numeric ID
user = client.twitter.get_user("44196397", identifier_type="id")

search_users(name, *, limit=None, fields) -> list[TwitterUser]

Search users by name or username. Returns up to 10 results.

users = client.twitter.search_users("elon")

get_user_connections(username, connection_type, *, fields, force_latest) -> PaginatedResult[TwitterUser]

Get followers or following for a user.

followers = client.twitter.get_user_connections("elonmusk", "followers")
following = client.twitter.get_user_connections("elonmusk", "following")

get_users_by_keywords(query, *, fields, start_date, end_date, language, force_latest) -> PaginatedResult[TwitterUser]

Find users who authored posts matching a keyword query. Includes aggregation fields like relevant_tweets_count, relevant_tweets_likes_sum.

users = client.twitter.get_users_by_keywords(
    '"machine learning"',
    fields=["username", "name", "followers_count", "relevant_tweets_count", "relevant_tweets_likes_sum"]
)

get_posts_by_ids(post_ids, *, fields, force_latest) -> list[TwitterPost]

Get 1-100 posts by their IDs.

tweets = client.twitter.get_posts_by_ids(["1234567890", "0987654321"])

get_posts_by_author(identifier, identifier_type="username", *, fields, start_date, end_date, force_latest) -> PaginatedResult[TwitterPost]

Get all posts by an author with optional date filtering.

results = client.twitter.get_posts_by_author("elonmusk", start_date="2025-01-01")

search_posts(query, *, fields, start_date, end_date, author_username, author_id, language, force_latest) -> PaginatedResult[TwitterPost]

Full-text search with filters. Supports exact phrases ("machine learning"), boolean operators (AI AND python), and parentheses.

results = client.twitter.search_posts(
    '"artificial intelligence" AND ethics',
    start_date="2025-01-01",
    end_date="2025-06-01",
    language="en",
    fields=["id", "text", "like_count", "author_username", "created_at_date"]
)

get_retweets(post_id, *, fields, start_date) -> PaginatedResult[TwitterPost]

Get retweets of a specific post (database only).

retweets = client.twitter.get_retweets("1234567890")

get_quotes(post_id, *, fields, start_date, force_latest) -> PaginatedResult[TwitterPost]

Get quote tweets of a specific post.

quotes = client.twitter.get_quotes("1234567890")

get_comments(post_id, *, fields, start_date, force_latest) -> PaginatedResult[TwitterPost]

Get replies to a specific post.

comments = client.twitter.get_comments("1234567890")

get_post_interacting_users(post_id, interaction_type, *, fields, force_latest) -> PaginatedResult[TwitterUser]

Get users who interacted with a post. interaction_type: "commenters", "quoters", "retweeters".

commenters = client.twitter.get_post_interacting_users("1234567890", "commenters")

count_posts(phrase, *, start_date, end_date) -> int

Count tweets containing a phrase within a date range.

count = client.twitter.count_posts("bitcoin", start_date="2025-01-01")
print(f"{count:,} tweets mention bitcoin")

Instagram — client.instagram

get_user(identifier, identifier_type="username", *, fields) -> InstagramUser

user = client.instagram.get_user("instagram")
print(f"{user.full_name} — {user.follower_count:,} followers")

search_users(name, *, limit=None, fields) -> list[InstagramUser]

users = client.instagram.search_users("nasa")

get_user_connections(username, connection_type, *, fields, force_latest) -> PaginatedResult[InstagramUser]

followers = client.instagram.get_user_connections("instagram", "followers")

get_users_by_keywords(query, *, fields, start_date, end_date, force_latest) -> PaginatedResult[InstagramUser]

users = client.instagram.get_users_by_keywords('"sustainable fashion"')

get_posts_by_ids(post_ids, *, fields, force_latest) -> list[InstagramPost]

Post IDs must be in strong_id format: "media_id_user_id" (e.g. "3606450040306139062_4836333238").

posts = client.instagram.get_posts_by_ids(["3606450040306139062_4836333238"])

get_posts_by_user(identifier, identifier_type="username", *, fields, start_date, end_date, force_latest) -> PaginatedResult[InstagramPost]

results = client.instagram.get_posts_by_user("nasa")

search_posts(query, *, fields, start_date, end_date, force_latest) -> PaginatedResult[InstagramPost]

results = client.instagram.search_posts("travel photography")

get_comments(post_id, *, fields, start_date, end_date, force_latest) -> PaginatedResult[InstagramComment]

comments = client.instagram.get_comments("3606450040306139062_4836333238")

get_post_interacting_users(post_id, interaction_type, *, fields, force_latest) -> PaginatedResult[InstagramUser]

interaction_type: "commenters", "likers".

likers = client.instagram.get_post_interacting_users("3606450040306139062_4836333238", "likers")

Reddit — client.reddit

get_user(username, *, fields) -> RedditUser

user = client.reddit.get_user("spez")
print(f"{user.username} — {user.total_karma:,} karma")

search_users(name, *, limit=None, fields) -> list[RedditUser]

users = client.reddit.search_users("spez")

get_users_by_keywords(query, *, fields, start_date, end_date, subreddit, force_latest) -> PaginatedResult[RedditUser]

users = client.reddit.get_users_by_keywords('"machine learning"', subreddit="MachineLearning")

search_posts(query, *, fields, start_date, end_date, sort, time, subreddit, force_latest) -> PaginatedResult[RedditPost]

sort: "relevance", "hot", "top", "new", "comments". time: "hour", "day", "week", "month", "year", "all".

results = client.reddit.search_posts(
    "python tutorial",
    subreddit="learnpython",
    sort="top",
    time="month"
)

get_post_with_comments(post_id, *, post_fields, comment_fields, force_latest) -> RedditPostWithComments

Returns a composite object with the post and its paginated comments.

result = client.reddit.get_post_with_comments("abc123")
print(result.post.title)
for comment in result.comments:
    print(f"  {comment.author_username}: {comment.body[:80]}")

search_comments(query, *, fields, start_date, end_date, subreddit) -> PaginatedResult[RedditComment]

comments = client.reddit.search_comments("helpful tip", subreddit="LifeProTips")

search_subreddits(query, *, limit=None, fields) -> list[RedditSubreddit]

subs = client.reddit.search_subreddits("machine learning")

get_subreddit_with_posts(subreddit_name, *, subreddit_fields, post_fields, force_latest) -> SubredditWithPosts

result = client.reddit.get_subreddit_with_posts("wallstreetbets")
print(f"r/{result.subreddit.display_name} — {result.subreddit.subscribers_count:,} members")
for post in result.posts:
    print(f"  {post.title} ({post.score} points)")

get_subreddits_by_keywords(query, *, fields, start_date, end_date, force_latest) -> PaginatedResult[RedditSubreddit]

subs = client.reddit.get_subreddits_by_keywords("cryptocurrency")

Type Models

All models are Pydantic v2 BaseModel subclasses with extra="allow" (unknown fields are preserved, not rejected). All fields are optional and default to None.

TwitterPost

Field Type Description
id str Post ID
text str Post text content
author_id str Author's user ID
author_username str Author's username
like_count int Number of likes
retweet_count int Number of retweets
reply_count int Number of replies
quote_count int Number of quotes
impression_count int Number of impressions
bookmark_count int Number of bookmarks
lang str Language code
hashtags list[str] Hashtags in tweet
mentions list[str] Mentioned usernames
media_urls list[str] Media attachment URLs
urls list[str] URLs in tweet
country str Country (if geo-tagged)
created_at str Creation timestamp
created_at_date str Creation date (YYYY-MM-DD)
conversation_id str Thread conversation ID
quoted_tweet_id str ID of quoted tweet
reply_to_tweet_id str ID of parent tweet
is_retweet bool Whether this is a retweet
possibly_sensitive bool Sensitive content flag

TwitterUser

Field Type Description
id str User ID
username str Username (handle)
name str Display name
description str Bio text
location str Location string
verified bool Verification status
verified_type str Verification type
followers_count int Number of followers
following_count int Number of following
tweet_count int Total tweets
likes_count int Total likes
profile_image_url str Profile picture URL
created_at str Account creation timestamp
account_based_in str Account location
is_inauthentic bool Inauthenticity flag
is_inauthentic_prob_score float Inauthenticity probability
avg_tweets_per_day_last_month float Tweeting frequency

InstagramPost

Field Type Description
id str Post ID (strong_id format)
caption str Post caption
username str Author username
full_name str Author display name
like_count int Number of likes
comment_count int Number of comments
reshare_count int Number of reshares
video_play_count int Video play count
media_type str Media type
image_url str Image URL
video_url str Video URL
created_at_date str Creation date

InstagramUser

Field Type Description
id str User ID
username str Username
full_name str Display name
biography str Bio text
is_private bool Private account
is_verified bool Verified status
follower_count int Followers
following_count int Following
media_count int Total posts
profile_pic_url str Profile picture URL

InstagramComment

Field Type Description
id str Comment ID
text str Comment text
username str Author username
parent_post_id str Parent post ID
like_count int Number of likes
child_comment_count int Reply count
created_at_date str Creation date

RedditPost

Field Type Description
id str Post ID
title str Post title
selftext str Post body text
author_username str Author username
subreddit_name str Subreddit name
score int Net score
upvotes int Upvote count
comments_count int Comment count
url str Post URL
permalink str Reddit permalink
is_self bool Self post (text only)
over18 bool NSFW flag
created_at_date str Creation date

RedditUser

Field Type Description
id str User ID
username str Username
total_karma int Total karma
link_karma int Link karma
comment_karma int Comment karma
is_gold bool Reddit Gold status
is_mod bool Moderator status
profile_description str Profile bio
created_at_date str Account creation date

RedditComment

Field Type Description
id str Comment ID
body str Comment text
author_username str Author username
parent_post_id str Parent post ID
score int Net score
depth int Nesting depth
is_submitter bool Is OP
created_at_date str Creation date

RedditSubreddit

Field Type Description
id str Subreddit ID
display_name str Subreddit name
title str Subreddit title
public_description str Short description
description str Full description
subscribers_count int Subscriber count
active_user_count int Active users
over18 bool NSFW flag
created_at_date str Creation date

Composite Types

RedditPostWithComments — returned by get_post_with_comments():

  • post: RedditPost
  • comments: list[RedditComment]
  • comments_pagination: PaginationInfo | None

SubredditWithPosts — returned by get_subreddit_with_posts():

  • subreddit: RedditSubreddit
  • posts: list[RedditPost]
  • posts_pagination: PaginationInfo | None

Environment Variables

Variable Description Default
XPOZ_API_KEY API key for authentication
XPOZ_SERVER_URL MCP server URL https://mcp.xpoz.ai/mcp

Testing

Tests hit the live Xpoz API and require a valid API key:

XPOZ_API_KEY=your-api-key pytest tests/ -v

Tests must run sequentially in a single process to avoid API rate limiting. Do not run multiple pytest processes in parallel.

License

MIT
