Extract Tweets for Free: Easy Python Guide Using Twikit (2025)


Want to collect tweets for your project but don't want to pay for Twitter's API? 🐦 You're not alone!

Twitter (now X) started charging for their API in 2023, but there's good news - you can still extract tweets for free using Python and a special library called Twikit!

In this beginner-friendly guide, I'll show you exactly how to:

  • Set up everything you need (even if you're new to Python)
  • Search for tweets on any topic
  • Save tweets to analyze later
  • All without spending a penny! 💰

Quick Summary: Install Twikit → Set up authentication → Search for tweets → Save data to CSV. Total time: ~15 minutes.


What You'll Need

Before we start, make sure you have:

  • Python installed on your computer (download it from python.org if you don't have it)
  • A Twitter/X account (free to create)
  • Basic knowledge of running commands (I'll explain everything step by step)

Step 1: Install the Required Tools 🛠️

First, we need to install two Python libraries:

  • Twikit: Helps us connect to Twitter and extract tweets
  • Pandas: Makes it easy to work with and save the tweet data

Open your terminal (Command Prompt on Windows) and run this command:

pip install twikit pandas

What this does: This command tells Python to download and install the tools we need. You should see some progress text as they install.


Step 2: Set Up Your Twitter Login 🔑

Twikit needs to log into Twitter to extract tweets. The safest way to handle your login information is to use environment variables (temporary settings that only exist during your current session).

Run these commands in your terminal, replacing the values with your actual Twitter information:

# For Windows Command Prompt (no quotes - set includes them in the value):
set TWITTER_USERNAME=your_twitter_username
set TWITTER_EMAIL=your_twitter_email
set TWITTER_PASSWORD=your_twitter_password

# For Mac/Linux Terminal:
export TWITTER_USERNAME="your_twitter_username"
export TWITTER_EMAIL="your_twitter_email"
export TWITTER_PASSWORD="your_twitter_password"

Security Tip: This method keeps your password out of your code. The environment variables will disappear when you close your terminal.
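Your script can then read these variables with os.getenv. Here is a minimal sketch (load_credentials is a hypothetical helper, not part of Twikit) that fails fast with a clear message if any variable is missing, so you catch setup mistakes before contacting Twitter:

```python
import os

def load_credentials():
    # Read the three variables set in Step 2
    creds = {name: os.getenv(name) for name in
             ("TWITTER_USERNAME", "TWITTER_EMAIL", "TWITTER_PASSWORD")}
    # Complain immediately about anything missing or empty
    missing = [name for name, value in creds.items() if not value]
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
    return creds
```

If you forget to run the export/set commands, you get one clear error instead of a confusing login failure later.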


Step 3: Create Your Tweet Extractor Script 📝

Now let's create a Python script that will:

  1. Log into Twitter
  2. Search for tweets
  3. Save the results

Create a new file called tweet_extractor.py and copy this code into it:

import asyncio
from twikit import Client
import os
import pandas as pd

# Create a Twitter client
client = Client("en-US")

# Step 1: Log into Twitter
async def authenticate():
    cookies_file = "twitter_cookies.json"
    
    # Try using saved cookies first (faster login)
    if os.path.exists(cookies_file):
        try:
            client.load_cookies(cookies_file)
            await client.user_id()  # quick request to check the cookies still work
            print("✅ Logged in using saved cookies!")
            return True
        except Exception:
            print("⚠️ Saved cookies expired, logging in again...")

    # Get login info from environment variables
    username = os.getenv("TWITTER_USERNAME")
    email = os.getenv("TWITTER_EMAIL")
    password = os.getenv("TWITTER_PASSWORD")
    
    # Log in with username/password
    try:
        await client.login(auth_info_1=username, auth_info_2=email, password=password)
        client.save_cookies(cookies_file)
        print("✅ Successfully logged in!")
        return True
    except Exception as e:
        print(f"❌ Login failed: {e}")
        return False

# Step 2: Search for tweets
async def search_tweets(keyword, max_tweets=100):
    print(f"🔍 Searching for tweets about '{keyword}'...")

    results = []

    # Get tweets (newest first). Each page holds roughly 20 tweets,
    # so keep fetching the next page until we have enough.
    tweets = await client.search_tweet(keyword, "Latest")
    while tweets:
        # Extract the information we want from each tweet
        for tweet in tweets:
            if len(results) >= max_tweets:
                break
            results.append({
                "text": tweet.text,
                "user": tweet.user.screen_name,
                "date": tweet.created_at,
                "likes": tweet.like_count,
                "retweets": tweet.retweet_count,
                "replies": tweet.reply_count,
                "url": f"https://twitter.com/{tweet.user.screen_name}/status/{tweet.id}"
            })
        if len(results) >= max_tweets:
            break
        tweets = await tweets.next()  # fetch the next page of results

    print(f"✅ Found {len(results)} tweets!")
    return results

# Step 3: Save tweets to a CSV file
def save_tweets_to_csv(tweets, filename="tweets.csv"):
    df = pd.DataFrame(tweets)
    df.to_csv(filename, index=False)
    print(f"💾 Saved {len(tweets)} tweets to {filename}")
    return filename

# Main function that runs everything
async def main():
    # Ask the user what they want to search for
    search_term = input("What topic do you want to search for? ")
    max_count = int(input("How many tweets do you want to collect? "))
    
    # Run the whole process
    if await authenticate():
        tweets = await search_tweets(search_term, max_count)
        if tweets:
            filename = save_tweets_to_csv(tweets, f"{search_term}_tweets.csv")
            print(f"🎉 All done! Your tweets are saved in {filename}")
            print("📊 You can open this file in Excel or Google Sheets")

# Run the program
if __name__ == "__main__":
    asyncio.run(main())

What This Code Does:

This script:

  1. Logs into Twitter using your credentials (or saved cookies for faster login)
  2. Searches for tweets on any topic you specify
  3. Extracts useful information from each tweet (text, user, likes, etc.)
  4. Saves everything to a CSV file you can open in Excel

Step 4: Run Your Tweet Extractor 🚀

Now let's run the script! In your terminal, navigate to where you saved the file and run:

python tweet_extractor.py

The script will:

  1. Ask what topic you want to search for
  2. Ask how many tweets you want to collect
  3. Log into Twitter
  4. Find and save the tweets

You should see something like this:

What topic do you want to search for? Python
How many tweets do you want to collect? 50
✅ Logged in using saved cookies!
🔍 Searching for tweets about 'Python'...
✅ Found 50 tweets!
💾 Saved 50 tweets to Python_tweets.csv
🎉 All done! Your tweets are saved in Python_tweets.csv
📊 You can open this file in Excel or Google Sheets

Step 5: Analyze Your Tweet Data 📊

Now you have a CSV file with tweets! You can:

  1. Open it in Excel or Google Sheets to view all the data
  2. Sort and filter to find the most popular tweets
  3. Create charts to visualize engagement

Here's a simple Python script to show some basic stats about your tweets:

import pandas as pd

# Load the CSV file
filename = input("Enter your CSV filename: ")
tweets = pd.read_csv(filename)

# Show some basic stats
print(f"Total tweets: {len(tweets)}")
print(f"Most liked tweet: {tweets.loc[tweets['likes'].idxmax()]['text']}")
print(f"Most retweeted tweet: {tweets.loc[tweets['retweets'].idxmax()]['text']}")

# Show average engagement
print(f"Average likes: {tweets['likes'].mean():.1f}")
print(f"Average retweets: {tweets['retweets'].mean():.1f}")

Save this as analyze_tweets.py and run it to see some quick stats about your tweets!
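You can go a step further and rank tweets by total engagement. A small sketch, assuming the same columns the extractor produces (likes, retweets, replies) and using an inline sample DataFrame in place of your CSV file:

```python
import pandas as pd

# Sample data standing in for pd.read_csv("your_tweets.csv")
tweets = pd.DataFrame([
    {"text": "tweet A", "likes": 10, "retweets": 2,  "replies": 1},
    {"text": "tweet B", "likes": 50, "retweets": 12, "replies": 4},
    {"text": "tweet C", "likes": 5,  "retweets": 0,  "replies": 0},
])

# Total engagement per tweet, then sort to surface the top performers
tweets["engagement"] = tweets["likes"] + tweets["retweets"] + tweets["replies"]
top = tweets.sort_values("engagement", ascending=False)
print(top[["text", "engagement"]])
```

Swap the sample DataFrame for pd.read_csv(filename) to run this on your own collected tweets.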


Common Problems and Solutions

"Login Failed" Error

Solution: Double-check your Twitter credentials. Make sure:

  • Your username/email is correct
  • Your password is correct
  • Your Twitter account isn't locked or requiring a verification code

"Module Not Found" Error

Solution: Make sure you've installed the required libraries:

pip install twikit pandas

Rate Limiting

Solution: If you're collecting a lot of tweets, Twitter may temporarily block further requests from your account. Wait 15-30 minutes, then try again with a smaller number of tweets.
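If you'd rather have the script wait and retry on its own, a generic backoff wrapper works. This sketch assumes nothing about Twikit's exception types and simply retries any failure (with_retries is a hypothetical helper, not part of Twikit):

```python
import time

def with_retries(fetch, attempts=3, base_delay=60):
    """Call fetch(); on failure, wait and retry with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fetch()
        except Exception as exc:
            if attempt == attempts - 1:
                raise  # out of attempts - give up and surface the error
            delay = base_delay * (2 ** attempt)  # 60s, 120s, 240s, ...
            print(f"Request failed ({exc}); retrying in {delay} seconds...")
            time.sleep(delay)
```

Wrap your collection call in a small function and pass it in, e.g. with_retries(lambda: run_my_search()).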


Going Further: Advanced Tweet Collection

Want to do more with your tweet collection? Here are some ideas:

1. Filter Tweets by Language

Twikit's search_tweet doesn't take a language parameter; instead, use Twitter's lang: search operator inside the query string:

tweets = await client.search_tweet(f"{keyword} lang:en", "Latest")  # English only
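Twitter's search syntax supports several operators that can be combined in one query string, such as lang:, -filter:retweets, and since:. A small hypothetical helper (build_query, not part of Twikit) keeps the assembly tidy:

```python
def build_query(keyword, lang=None, exclude_retweets=False, since=None):
    # Assemble a Twitter search query from standard search operators
    parts = [keyword]
    if lang:
        parts.append(f"lang:{lang}")         # e.g. lang:en for English only
    if exclude_retweets:
        parts.append("-filter:retweets")     # original tweets only
    if since:
        parts.append(f"since:{since}")       # e.g. since:2025-01-01
    return " ".join(parts)

print(build_query("python", lang="en", exclude_retweets=True))
```

Pass the result straight to client.search_tweet(query, "Latest").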

2. Collect Tweets from a Specific User

async def get_user_tweets(username, max_tweets=50):
    # Look up the user, then fetch their recent tweets
    user = await client.get_user_by_screen_name(username)
    tweets = await client.get_user_tweets(user.id, "Tweets")
    results = []
    for tweet in tweets:
        if len(results) >= max_tweets:
            break
        results.append({"text": tweet.text, "date": tweet.created_at})
    return results

3. Schedule Regular Collection

Use a scheduling library like schedule (install it with pip install schedule) to run your script automatically:

import asyncio
import schedule
import time

from tweet_extractor import main  # reuse the script from Step 3

def job():
    asyncio.run(main())

schedule.every().day.at("10:00").do(job)

while True:
    schedule.run_pending()
    time.sleep(1)

Tip: main() asks for input interactively, so hard-code the search term and tweet count if you want the scheduled job to run unattended.

Conclusion

Congratulations! 🎉 You've learned how to:

  • Extract tweets for free using Python and Twikit
  • Save tweet data for analysis
  • Avoid paying for the expensive Twitter API

This approach is perfect for:

  • Research projects
  • Social media analysis
  • Tracking trends
  • Sentiment analysis

By following this guide, you now have a powerful tool for collecting Twitter data without spending money on API access.


Next Steps


Want to see the complete code? Visit my GitHub repository for the full solution and additional examples.

Happy tweet collecting! 🐦