
How to Create an AI-Powered Bot that Can Post on Twitter/X
These days, everyone wants to be a content creator. But it can be hard to find time to create and curate content, post on social media, build engagement, and grow your brand.
And I’m not an exception to this. I wanted to create more content, and had an idea based on something I’ve observed. I subscribe to a few technology newsletters, and I read lots of updates every day about the tech ecosystem. But I’ve noticed that many of my peers often don’t seem to be aware of this news. So, I decided to post my top three news stories (especially about AI) on my Twitter/X account every day.
I did this for a couple of weeks, but after that I couldn’t find the time to keep it going. So, I did some research into how I could automate the process, and I found a solution. In this guide, I’ll explain the process so you can use it, too.
By the end of this tutorial, you’ll have created your own AI bot that:
- Fetches data from an API or crawls a webpage
- Processes the data using AI
- Posts the results on Twitter/X
And the great thing: this entire process is automated.
Prerequisites
Before we begin creating a bot, you’ll need to have the following setup and tools ready to go:
- Node.js - We'll code the bot as a simple Node.js app
You’ll also need some API keys, secrets, and tokens. So, you’ll need to have the following accounts created:
- Twitter Developer – To generate the Twitter/X API keys, secrets, and tokens
- Google AI Studio – To generate the Gemini API key
How to Build the Bot
There are a number of steps I’ll walk you through to build your bot.
We’ll start by generating an API Key and Secret so we can use the Twitter/X API. Then we’ll generate an access token and access token secret with “Read and Write” permissions that’ll be able to post in your account. After that we’ll generate an API Key in Google Gemini (we’ll be using the Gemini API to process the data).
With all that taken care of, we’ll start working on the Node.js app. The app will be able to fetch data from an API, process the data using AI, and then post that data in the form of tweets on Twitter/X.
Finally, we’ll automate the entire process and schedule it to run daily.
Step 1: Generate the Twitter API Key
Navigate to the Twitter Developer website.
Click on the “Developer Portal” in the top right:

Sign up using your account.
You’ll be asked to fill out a form about how you will use the Twitter API, along with a few basic details. It may take up to 24 hours to get approved, but it was approved instantly for me.

After logging in, navigate to "Projects and Apps" and, under “Overview”, click on "Create App":

Enter a name for your app and click “Next” to proceed with creating your app. At the end, you’ll be shown your API Key and Secret. You don’t need to copy them yet, since we’ll regenerate them in a moment.
Click on the project you created in the left-side drawer and click on the "Edit" option in the “User authentication settings” section.

Select “Read and Write” in the App Permissions section, “Web App, Automated App or Bot” in the Type of App section, and enter your website URL (it can be any URL, including http://localhost) in the “Callback URI” and “Website URL” fields. Then hit “Save”.
Go to “Keys and tokens” tab.
Click on the “Regenerate” button in the “API Key and Secret” section.
Copy and save the API Key and Secret somewhere securely.
Step 2: Generate Access Token and Secret
Go to “Keys and tokens” tab.
Click on the “Generate” or “Regenerate” button in the “Access Token and Secret” section.
Copy and save the Access Token and Secret somewhere securely.

Step 3: Generate an API Key in Google Gemini
Navigate to Google AI Studio.
Log in to your account.
Click on the “Get API Key” button at the top right.
Click on the “Create API Key” button.

Copy and save the API Key somewhere securely.
Alright, we are done with creating the necessary API Keys and Secrets for our project. Let’s put on our coding shoes.
Node.js Project Setup
There are 5 major steps for this part of the project. They are:
- Fetch data from the API
- Upload the data as a file to Gemini API
- Prompt Gemini with the uploaded file to get the latest AI news
- Post news to Twitter/X using their API
- Delete the file uploaded in Gemini API
What follows are the snippets of code that you can assemble into a single script to run this project.
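Before diving into the individual snippets, here’s a minimal setup sketch of the imports and shared objects (the ai Gemini client, the twitterUserClient, and the module-level fileName variable) that the snippets below rely on. It assumes ES modules and the @google/genai, twitter-api-v2, axios, and dotenv npm packages, which is my reading of the APIs used in the snippets, so treat it as a sketch rather than the exact repo code.
// Setup sketch: imports and clients assumed by the snippets below
import "dotenv/config";
import axios from "axios";
import fs from "fs/promises";
import os from "os";
import path from "path";
import {
  GoogleGenAI,
  createUserContent,
  createPartFromUri,
  Type,
} from "@google/genai";
import { TwitterApi } from "twitter-api-v2";

// Gemini client (uses the API key generated in Step 3 above)
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Twitter/X client with the "Read and Write" credentials from Steps 1 and 2
const twitterUserClient = new TwitterApi({
  appKey: process.env.TWITTER_API_KEY,
  appSecret: process.env.TWITTER_API_SECRET,
  accessToken: process.env.TWITTER_ACCESS_TOKEN,
  accessSecret: process.env.TWITTER_ACCESS_TOKEN_SECRET,
});

// Name of the file uploaded to Gemini, used for cleanup in Step 5
let fileName = "";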
Step 1: Fetch Data from the API
In my case, I’ll be using techmeme.com to get the latest news. But this site does not offer an API, so I’ll be downloading the HTML of the site instead.
async function fetchHtml(url) {
  console.log(`Fetching HTML from ${url}...`);
  try {
    const response = await axios.get(url, {
      headers: {
        "User-Agent":
          "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
      },
    });
    if (response.status !== 200) {
      throw new Error(`Failed to fetch HTML: Status code ${response.status}`);
    }
    console.log("HTML fetched successfully.");
    return response.data;
  } catch (error) {
    console.error(`Error fetching HTML from ${url}:`, error.message);
    throw error;
  }
}
In the User-Agent header, we pass a value that mimics a browser user agent to avoid potential blocks.
Step 2: Upload the Data as a File to Gemini API
Now we need to store this HTML in a separate file. We can’t pass the HTML directly in the prompt to the Gemini API, as that would result in an error: the API accepts only a limited number of tokens per request, and the HTML of a full webpage always adds up to a huge number of tokens. So, we’ll write the HTML to a separate file, upload that file to the Gemini File API, and reference the uploaded file in our prompt to Gemini.
async function uploadHtmlToGemini(htmlContent, filename = "techmeme.html") {
  console.log(`Uploading ${filename} to Gemini File API...`);
  const tempFilePath = path.join(os.tmpdir(), filename);
  try {
    await fs.writeFile(tempFilePath, htmlContent);
    console.log(`HTML saved temporarily to ${tempFilePath}`);
    const uploadResult = await ai.files.upload({
      file: tempFilePath,
      config: {
        mimeType: "text/html",
        displayName: filename,
      },
    });
    // fileName is a module-level variable, reused later when deleting the file
    fileName = uploadResult.name;
    console.log(
      `File uploaded successfully. File Name: ${fileName}, URI: ${uploadResult.uri}`
    );
    await fs.unlink(tempFilePath);
    console.log(`Temporary file ${tempFilePath} deleted.`);
    return uploadResult;
  } catch (error) {
    console.error("Error during file upload to Gemini:", error);
    try {
      await fs.access(tempFilePath);
      await fs.unlink(tempFilePath);
      console.log(`Temporary file ${tempFilePath} cleaned up after error.`);
    } catch (cleanupError) {
      console.warn(
        `Could not cleanup temporary file ${tempFilePath}: ${cleanupError.message}`
      );
    }
    throw new Error(
      `Failed to upload file to Gemini File API: ${error.message}`
    );
  }
}
Step 3: Prompt Gemini to Get the Latest AI News
Let’s write a prompt asking Gemini to pick the top news from the HTML file we provide. We’ll ask it for a headline, a short description, a URL, and one to three relevant hashtags for each tweet. We’ll also give it some example data showing how the output should look, and request a structured response by providing the JSON format we want the output in.
You can use whatever model you want, but I’ll be using the gemini-2.5-pro-exp-03-25 model for this use case. I’m using this model because we need a thinking model that reasons about the stories and picks the correct top news, not just one that predicts the next token/word. The Gemini 2.5 Pro model is the best fit for this.
async function extractAiNewsWithGemini(uploadedFile) {
  console.log("Sending HTML to Gemini for AI news extraction...");
  try {
    const prompt = `
Analyze the content of the provided HTML file (${uploadedFile.displayName}), which contains the Techmeme homepage. Identify the top 3 news headlines specifically related to Artificial Intelligence (AI), Machine Learning (ML), Large Language Models (LLMs), Generative AI, or significant AI company news (like OpenAI, Anthropic, Google AI, Meta AI, etc.).
For each of the top 3 AI news items, provide:
1. The main headline text (title) not exceeding 60 characters.
2. A short description (not exceeding 120 characters) summarizing the news.
3. The direct URL (link) associated with that headline on Techmeme. Link to the news article source.
4. A list of 1-3 relevant hashtags (e.g., #Google, #Gemini, #LLM, #Llama, #Funding, #Research, #OpenAI, #ChatGPT, #Claude, #Sonnet, #Microsoft, #Techcrunch, #Bloomberg). Don't include #AI or #ArtificialIntelligence hashtags. If possible, include one hashtag of the publisher.
Return the result ONLY as a JSON array of objects, where each object has the keys "title", "short_description", "link", and "hashtags" (which is an array of strings). Do not include any explanations around the JSON. However, you may include emojis.
Also, give me a short content before these lines to start with and give me a short content asking the user to follow to receive more content at the end.
Here's an example:
Intro:
From big price tags to free college perks, the AI world isn’t slowing down. Here are today’s top 3 stories you should know:
Top 3 AI News:
1️⃣ Google's Gemini 2.5 Pro Comes with a Premium Price Tag 💰
Google reveals pricing for Gemini 2.5 Pro—its most expensive model yet—at $1.25 per million input tokens and $10 per million output tokens.
(Source: TechCrunch – Maxwell Zeff)
Because what's cutting-edge AI without a price that cuts deep?
2️⃣ OpenAI Gives ChatGPT Plus to College Students for Free 🎓
College students in the US and Canada can now access ChatGPT Plus for free until May 2025, in a clear jab at Anthropic’s campus push.
(Source: VentureBeat – Michael Nuñez)
Nothing says “future of education” like AI doing your homework—for free.
3️⃣ Midjourney V7 Enters Alpha With a Whole New Brain 🧠
Midjourney launches V7 in alpha, its first major model update in nearly a year, built on a “totally different architecture.”
(Source: TechCrunch – Kyle Wiggers)
Just when you mastered prompts, they dropped a new engine like it’s Fast & Furious: AI Drift.
Outro:
That’s a wrap on today’s AI buzz. Follow for more quick updates—minus the fluff. ⚡
`;
    const response = await ai.models.generateContent({
      model: "models/gemini-2.5-pro-exp-03-25",
      contents: [
        createUserContent([
          prompt,
          createPartFromUri(uploadedFile.uri, uploadedFile.mimeType),
        ]),
      ],
      config: {
        responseMimeType: "application/json",
        responseSchema: {
          type: Type.OBJECT,
          properties: {
            intro: {
              type: Type.STRING,
              description: "Introduction to the post",
              nullable: false,
            },
            news_items: {
              type: Type.ARRAY,
              items: {
                type: Type.OBJECT,
                properties: {
                  title: {
                    type: Type.STRING,
                    description:
                      "Title of the news not exceeding 60 characters",
                    nullable: false,
                  },
                  short_description: {
                    type: Type.STRING,
                    description:
                      "Short description of the news not exceeding 120 characters",
                    nullable: false,
                  },
                  link: {
                    type: Type.STRING,
                    description: "Link to the news article",
                    nullable: false,
                  },
                  hashtags: {
                    type: Type.ARRAY,
                    items: {
                      type: Type.STRING,
                      description:
                        "Hashtags related to the news. Don't include #AI or #ArtificialIntelligence hashtags",
                      nullable: false,
                    },
                    minItems: 1,
                    maxItems: 3,
                  },
                },
                required: ["title", "link", "hashtags"],
              },
            },
            outro: {
              type: Type.STRING,
              description: "Conclusion of the post",
              nullable: false,
            },
          },
        },
      },
    });
    const text = response.text;
    console.log("Gemini response received", text);
    // Strip a Markdown code fence if the model wrapped the JSON in one
    let cleanedText = text.trim();
    if (cleanedText.startsWith("```json")) {
      cleanedText = cleanedText.substring(7);
    }
    if (cleanedText.endsWith("```")) {
      cleanedText = cleanedText.substring(0, cleanedText.length - 3);
    }
    cleanedText = cleanedText.trim();
    let aiNews;
    try {
      aiNews = JSON.parse(cleanedText);
    } catch (parseError) {
      console.error("Error parsing JSON response from Gemini:", parseError);
      console.error("Raw Gemini response text:", text);
      throw new Error("Failed to parse structured data from Gemini.");
    }
    if (!aiNews.intro || !aiNews.outro || !Array.isArray(aiNews.news_items)) {
      throw new Error(
        "Gemini response does not contain the expected structure."
      );
    }
    aiNews.news_items.forEach((item, index) => {
      if (
        !item.title ||
        !item.link ||
        !item.hashtags ||
        !Array.isArray(item.hashtags)
      ) {
        console.warn(
          `News item at index ${index} has missing or invalid fields:`,
          item
        );
      }
      // Turn relative Techmeme links into absolute URLs
      if (item.link && item.link.startsWith("/")) {
        item.link = `https://techmeme.com${item.link}`;
      }
      // Ensure every hashtag starts with a "#" prefix
      const hashTags =
        item.hashtags?.map((hashtag) => {
          if (hashtag.startsWith("#")) {
            return hashtag;
          }
          return `#${hashtag}`;
        }) || [];
      item.hashtags = hashTags;
    });
    console.log(`Extracted ${aiNews.news_items.length} AI news items.`);
    // Keep only the top 3 news items
    aiNews.news_items = aiNews.news_items.slice(0, 3);
    return aiNews;
  } catch (error) {
    console.error("Error interacting with Gemini API:", error.message);
    if (error.response) {
      console.error("Gemini Error Details:", error.response);
    }
    throw error;
  }
}
Step 4: Post Using the Twitter/X API
Here’s the core of our app. We need to post all the tweets we received from Gemini, and we’ll post them as a thread: the first tweet is the root, and each subsequent tweet is posted as a reply to the previous one.
To do this, we’ll take the id of each tweet after it’s posted and pass it to the next tweet as the reply reference. One additional thing to note: after each successful tweet, we’ll pause for 5 seconds before posting the next one. There are a few reasons for doing it this way.
- A script runs much faster than the network (its steps complete in milliseconds), so the second tweet may finish posting before the first one does (for example, on a poor internet connection). I also believe Twitter implements some queue system, which may process the second tweet before your first. So it’s always better to leave a small gap, if not 5 seconds then at least 1 second.
- Twitter may apply rate limiting. If multiple requests are received from the same IP within a short time frame, they may block the IP and flag your account as spam.
- Since we’re using the Free tier of the API, we’re limited to 1,500 tweets per month. If you’ve paid for the API, you won’t have to worry about this (you’ll have a higher limit, and the rate limiting mentioned in the previous point might not apply). All of this depends on their pricing, so refer to it and make your call accordingly.
I’m using the free tier, and since it’s a hobby project, a 5-second wait makes sense. I haven’t faced any issues with it so far.
async function postNewsToTwitter(aiNews) {
  if (!aiNews || aiNews.news_items.length === 0) {
    console.log("No news items to post.");
    return;
  }
  console.log(
    `Posting ${aiNews.news_items.length} news items to Twitter as a thread...`
  );
  let previousTweetId = null;

  // Intro tweet (root of the thread)
  let tweetText = aiNews.intro;
  tweetText += `\n\n#AI #ArtificialIntelligence #MachineLearning #LLM #GenerativeAI #OpenAI #Anthropic #GoogleAI #MetaAI #Gemini #Techmeme`;
  let postOptions = { text: tweetText };
  const { data: createdTweet } = await twitterUserClient.v2.tweet(postOptions);
  console.log(`Intro tweet posted successfully! ID: ${createdTweet.id}`);
  previousTweetId = createdTweet.id;
  await new Promise((resolve) => setTimeout(resolve, 5000));

  // One tweet per news item, each posted as a reply to the previous tweet
  let hasError = false;
  for (let i = 0; i < aiNews.news_items.length; i++) {
    const item = aiNews.news_items[i];
    const hashtagString = item.hashtags.join(" ");
    let tweetText = `${item.title}\n\n${item.short_description}\n\n${item.link}\n\n${hashtagString}`;
    if (aiNews.news_items.length > 1) {
      // tweetText += `\n\n(${i + 1}/${aiNews.news_items.length})`;
    }
    if (tweetText.length > 280) {
      console.warn(
        `Tweet ${i + 1} might be too long (${
          tweetText.length
        } chars), attempting to post anyway...`
      );
    }
    try {
      console.log(`Posting tweet ${i + 1}: ${item.title}`);
      const postOptions = { text: tweetText };
      if (previousTweetId) {
        postOptions.reply = { in_reply_to_tweet_id: previousTweetId };
      }
      const { data: createdTweet } = await twitterUserClient.v2.tweet(
        postOptions
      );
      console.log(`Tweet ${i + 1} posted successfully! ID: ${createdTweet.id}`);
      previousTweetId = createdTweet.id;
      if (i < aiNews.news_items.length - 1) {
        // Pause 5 seconds before the next tweet (see the notes above)
        await new Promise((resolve) => setTimeout(resolve, 5000));
      }
    } catch (error) {
      hasError = true;
      console.error(`Error posting tweet ${i + 1}:`, error.message || error);
      if (error.data) {
        console.error("Twitter API Error Details:", error.data);
      }
      // if (i === 0) {
      //   console.error("Aborting further posts due to error.");
      //   break;
      // }
    }
  }

  // Outro tweet, only if everything above succeeded
  if (!hasError) {
    await new Promise((resolve) => setTimeout(resolve, 5000));
    let tweetText = aiNews.outro;
    tweetText += `\n\n@AI_Techie_Arun`;
    postOptions = { text: tweetText };
    if (previousTweetId) {
      postOptions.reply = { in_reply_to_tweet_id: previousTweetId };
    }
    const { data: createdTweet } = await twitterUserClient.v2.tweet(
      postOptions
    );
    console.log(`Outro tweet posted successfully! ID: ${createdTweet.id}`);
    previousTweetId = createdTweet.id;
  }
  console.log("Finished posting thread.");
}
Step 5: Delete the File Uploaded in the Gemini API
After posting all the tweets, it’s time to clean up. The only cleanup we need to do is delete the uploaded file. It’s always a best practice to remove a file that’s no longer needed, and since we’ve already posted the tweets, we no longer need this one. So, we’ll delete it in this step.
async function deleteFile(fileName) {
  if (fileName === "") {
    console.log("File not uploaded. Skipping deletion.");
    return;
  }
  try {
    console.log(`Deleting file ${fileName}...`);
    await ai.files.delete({
      name: fileName,
    });
    console.log(`File ${fileName} deleted successfully.`);
  } catch (error) {
    console.error("Error deleting file:", error.message);
  }
}
That’s it. We’re all done. You just need to copy these blocks of code into an index.js file and install some dependencies into the project, and you should be good to go.
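If you’re assembling the snippets yourself, here’s one way to wire the five steps together. This is a sketch under the same assumptions as the setup snippet above, not the exact code from my repo, and the TECHMEME_URL constant is just an illustrative name.
// Chain the five steps: fetch, upload, extract, post, clean up
const TECHMEME_URL = "https://www.techmeme.com/";

async function main() {
  try {
    const html = await fetchHtml(TECHMEME_URL); // Step 1
    const uploadedFile = await uploadHtmlToGemini(html); // Step 2
    const aiNews = await extractAiNewsWithGemini(uploadedFile); // Step 3
    await postNewsToTwitter(aiNews); // Step 4
  } catch (error) {
    console.error("Bot run failed:", error.message);
  } finally {
    await deleteFile(fileName); // Step 5: fileName is "" if the upload never happened
  }
}

main();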
To make this even simpler, I have created a repo and made it public. Here’s the GitHub repo (arunachalam-b/existential-crisis-alert-bot). You just need to clone the repo, install the dependencies, and run the post command:
git clone https://github.com/arunachalam-b/existential-crisis-alert-bot.git
cd existential-crisis-alert-bot
npm i
Create a .env file and update your API keys and secrets in that file:
GEMINI_API_KEY=
TWITTER_API_KEY=
TWITTER_API_SECRET=
TWITTER_ACCESS_TOKEN=
TWITTER_ACCESS_TOKEN_SECRET=
Run the following command to post the latest AI news to your account:
npm run post
The Result

You can modify the code/prompt to fetch data from a different API and post the top results in your Twitter account.
Conclusion
I hope you now understand how you can automate a slightly complex process using AI and some APIs. Just note that this example is not completely automated: you still have to manually run the command every day to post the tweets.
But you can automate that part as well, for example by scheduling the command to run daily (a simple option is sketched below). That topic deserves a separate tutorial of its own, so just drop me a message if you wish to know more about it. Also, I’d appreciate it if you gave my project a star if you enjoyed this tutorial.
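One minimal option on Linux or macOS, for example, is a cron entry that runs the post command once a day. The schedule, path, and log file below are placeholders you’d adapt to your own machine.
# Hypothetical crontab entry: run the bot every day at 9:00 AM
0 9 * * * cd /path/to/existential-crisis-alert-bot && npm run post >> bot.log 2>&1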
Meanwhile, you can follow my Twitter/X account (AI_Techie_Arun) to receive the top AI news every day. If you wish to learn more about automation, subscribe to my email newsletter and follow me on social media.