Free Tool

Generate Mock Data for Development - Accelerate Your Development Workflow

Generate realistic mock data for faster development, testing, and prototyping. Create comprehensive datasets for database seeding, API testing, and frontend development with our advanced tools.

10 min read
Updated 2024-01-15

Try Our Free Generator

Development Workflow Accelerator

Generate production-ready mock data tailored for development workflows. From database seeding to API testing, create comprehensive datasets that accelerate your development cycle and improve testing coverage.

Database Seeding

  • Complete schema population
  • Referential integrity maintained
  • Realistic data relationships
  • Bulk insert optimization
  • PostgreSQL, MySQL, MongoDB
npm run seed:dev --users=1000
--products=500 --orders=2000

API Testing

  • REST & GraphQL responses
  • Error scenario simulation
  • Authentication test cases
  • Rate limiting scenarios
  • Pagination & filtering
mockApi.get('/users')
.reply(200, mockUsers)

Frontend Development

  • Component story data
  • Dashboard mock datasets
  • Form validation scenarios
  • Loading & error states
  • Responsive design testing
const mockData = MockService
.generateDashboard()

CI/CD Integration

  • Automated data generation on deploy
  • GitHub Actions workflows
  • Docker-based generation services
  • Environment-specific configurations
  • Artifact storage and sharing
# Daily mock data refresh
cron: '0 2 * * *'

Performance Testing

  • Large-scale dataset generation
  • Load testing scenarios
  • Memory-efficient processing
  • Realistic traffic patterns
  • Benchmark data creation
generateLoadTest('e-commerce')

Development Workflow Examples

1. Local Development

Quick database seeding for feature development

npm run dev:seed
# 100 users, 50 products

2. Staging Environment

Production-scale data for integration testing

deploy:staging
# 10k users, 5k products

3. Demo Environment

Realistic data for client presentations

demo:refresh
# Curated realistic data

Mock Data Generator in Action

See how our tool generates realistic test data with advanced customization options

Accelerate Development with Realistic Mock Data

Generate mock data that transforms your development workflow from data-starved iterations to rich, realistic testing environments. Modern software development demands comprehensive datasets for effective testing, prototyping, and demonstration, yet acquiring production-quality data often creates bottlenecks, privacy concerns, and compliance challenges.

Our Mock Data Generator provides instant access to realistic, customizable datasets that mirror production complexity while maintaining complete privacy safety. Whether you're seeding development databases, testing API endpoints, or building frontend prototypes, our tool generates the exact data patterns your applications need to thrive.

Why Mock Data Matters in Development

Development teams face constant pressure to deliver faster while maintaining quality. Mock data bridges the gap between empty databases and production readiness:

  • Faster Development Cycles: No waiting for sanitized production data exports
  • Realistic Testing Scenarios: Uncover edge cases and performance issues early
  • Team Collaboration: Shared, consistent datasets across development environments
  • Demonstration Ready: Professional demos with rich, believable content
  • Privacy Compliance: Zero risk of exposing sensitive customer information

Comprehensive Mock Data for Every Development Need

Database Seeding Strategies

Transform empty development databases into rich, production-like environments:

Relational Database Schemas

Generate complete database schemas with realistic relationships and constraints:

-- User management system with realistic data patterns
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    uuid UUID DEFAULT gen_random_uuid(),
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    first_name VARCHAR(50) NOT NULL,
    last_name VARCHAR(50) NOT NULL,
    date_of_birth DATE,
    phone VARCHAR(20),
    avatar_url TEXT,
    bio TEXT,
    location VARCHAR(100),
    website_url TEXT,
    social_links JSONB,
    preferences JSONB DEFAULT '{}',
    email_verified BOOLEAN DEFAULT FALSE,
    account_status VARCHAR(20) DEFAULT 'active',
    last_login_at TIMESTAMP,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- Content management with realistic posting patterns
CREATE TABLE posts (
    id SERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    title VARCHAR(200) NOT NULL,
    slug VARCHAR(250) UNIQUE NOT NULL,
    content TEXT NOT NULL,
    excerpt TEXT,
    featured_image_url TEXT,
    category_id INTEGER REFERENCES categories(id),
    tags TEXT[],
    meta_description VARCHAR(160),
    meta_keywords TEXT[],
    status VARCHAR(20) DEFAULT 'draft',
    visibility VARCHAR(20) DEFAULT 'public',
    published_at TIMESTAMP,
    view_count INTEGER DEFAULT 0,
    like_count INTEGER DEFAULT 0,
    comment_count INTEGER DEFAULT 0,
    reading_time_minutes INTEGER,
    seo_score INTEGER,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- E-commerce product catalog with inventory management
CREATE TABLE products (
    id SERIAL PRIMARY KEY,
    sku VARCHAR(50) UNIQUE NOT NULL,
    name VARCHAR(200) NOT NULL,
    description TEXT,
    short_description VARCHAR(500),
    category_id INTEGER REFERENCES categories(id),
    brand_id INTEGER REFERENCES brands(id),
    price DECIMAL(10,2) NOT NULL,
    compare_at_price DECIMAL(10,2),
    cost_per_item DECIMAL(10,2),
    weight DECIMAL(8,3),
    dimensions JSONB, -- {length, width, height}
    material VARCHAR(100),
    color VARCHAR(50),
    size VARCHAR(20),
    stock_quantity INTEGER DEFAULT 0,
    low_stock_threshold INTEGER DEFAULT 5,
    track_inventory BOOLEAN DEFAULT TRUE,
    allow_backorders BOOLEAN DEFAULT FALSE,
    is_digital BOOLEAN DEFAULT FALSE,
    requires_shipping BOOLEAN DEFAULT TRUE,
    tax_category VARCHAR(50),
    status VARCHAR(20) DEFAULT 'active',
    featured BOOLEAN DEFAULT FALSE,
    images JSONB, -- Array of image URLs
    seo_title VARCHAR(200),
    seo_description VARCHAR(300),
    search_keywords TEXT[],
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
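A seeding script can turn schemas like these into populated tables. The sketch below is a minimal, dependency-free illustration that emits a multi-row INSERT for the users table; the name pools and the `q` escape helper are illustrative assumptions, and a real seeder would use a database driver (e.g. pg) with parameterized queries instead of string-built SQL.

```javascript
// Minimal seeding sketch (no external libraries): emits INSERT statements
// for the users table defined above.
const firstNames = ["Ava", "Liam", "Noah", "Mia", "Zoe"];
const lastNames = ["Garcia", "Kim", "Patel", "Nguyen", "Okafor"];

// Escape single quotes for SQL string literals
const q = (s) => `'${String(s).replace(/'/g, "''")}'`;

function buildUserInserts(count) {
  const rows = [];
  for (let i = 0; i < count; i++) {
    const first = firstNames[i % firstNames.length];
    const last = lastNames[i % lastNames.length];
    const username = `${first.toLowerCase()}_${last.toLowerCase()}_${i}`;
    const email = `${username}@example.com`;
    rows.push(
      `(${q(username)}, ${q(email)}, ${q("hash_placeholder")}, ${q(first)}, ${q(last)})`
    );
  }
  // One multi-row INSERT keeps round-trips down (bulk insert optimization)
  return (
    "INSERT INTO users (username, email, password_hash, first_name, last_name) VALUES\n" +
    rows.join(",\n") + ";"
  );
}

console.log(buildUserInserts(3));
```

Piping the output through `psql` (or executing it via a driver) gives a repeatable local seed without touching production data.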

NoSQL Document Structures

Create complex MongoDB documents with nested relationships:

// User profile documents with rich metadata
const userProfiles = [
  {
    _id: ObjectId(),
    userId: "user_001",
    personal: {
      firstName: "Alexandra",
      lastName: "Chen",
      email: "alexandra.chen@example.com",
      phone: "+1-555-0123",
      dateOfBirth: ISODate("1992-03-15"),
      gender: "female",
      nationality: "Canadian"
    },
    address: {
      primary: {
        street: "123 Maple Street",
        city: "Vancouver",
        province: "BC",
        postalCode: "V6K 2A1",
        country: "Canada",
        type: "home"
      },
      shipping: {
        street: "456 Office Blvd",
        city: "Vancouver",
        province: "BC", 
        postalCode: "V6B 1A1",
        country: "Canada",
        type: "work"
      }
    },
    preferences: {
      language: "en-CA",
      timezone: "America/Vancouver",
      currency: "CAD",
      communications: {
        email: true,
        sms: false,
        push: true,
        marketing: true
      },
      privacy: {
        profileVisibility: "friends",
        showEmail: false,
        showPhone: false,
        allowDataCollection: true
      }
    },
    social: {
      connections: [
        { platform: "linkedin", username: "alexandra-chen", verified: true },
        { platform: "github", username: "achen-dev", verified: true },
        { platform: "twitter", username: "alex_codes", verified: false }
      ],
      followers: 1247,
      following: 892
    },
    activity: {
      lastLogin: ISODate("2024-01-15T09:30:00Z"),
      loginCount: 342,
      sessionDuration: 28.5, // minutes average
      lastActivity: ISODate("2024-01-15T11:45:00Z"),
      accountAge: 1095 // days
    },
    metadata: {
      source: "organic_signup",
      referrer: "github.com",
      experiments: ["new_dashboard_v2", "premium_trial"],
      tags: ["developer", "premium_candidate", "active_user"],
      customFields: {
        company: "TechStart Solutions",
        role: "Senior Full Stack Developer",
        experience: "5-7 years",
        interests: ["React", "Node.js", "Machine Learning"]
      }
    },
    createdAt: ISODate("2021-01-15T14:22:00Z"),
    updatedAt: ISODate("2024-01-15T11:45:00Z")
  }
];

// Product catalog with complex attributes
const productCatalog = [
  {
    _id: ObjectId(),
    sku: "LAPTOP-MBA-M2-256",
    name: "MacBook Air 13-inch with M2 Chip",
    description: "Supercharged by the next-generation M2 chip, the redesigned MacBook Air combines incredible performance and energy efficiency.",
    category: {
      primary: "Electronics",
      secondary: "Computers",
      tertiary: "Laptops"
    },
    brand: {
      name: "Apple",
      manufacturerId: "BRAND_APPLE",
      warranty: "1 year limited warranty"
    },
    pricing: {
      basePrice: 1199.00,
      salePrice: 1099.00,
      currency: "USD",
      priceHistory: [
        { price: 1199.00, date: ISODate("2023-01-01"), reason: "launch_price" },
        { price: 1099.00, date: ISODate("2024-01-01"), reason: "new_year_sale" }
      ],
      costPrice: 850.00,
      margin: 22.6 // percentage
    },
    specifications: {
      processor: {
        brand: "Apple",
        model: "M2",
        cores: 8,
        baseClock: "3.49 GHz",
        architecture: "ARM64"
      },
      memory: {
        size: "8GB",
        type: "LPDDR5",
        speed: "6400 MHz",
        expandable: false
      },
      storage: {
        size: "256GB",
        type: "SSD",
        interface: "PCIe NVMe",
        expandable: false
      },
      display: {
        size: "13.6 inches",
        resolution: "2560 x 1664",
        technology: "Liquid Retina",
        brightness: "500 nits",
        colorGamut: "P3 wide color"
      },
      connectivity: {
        wifi: "802.11ax Wi-Fi 6",
        bluetooth: "5.3",
        ports: ["2x Thunderbolt 3", "3.5mm headphone jack"]
      },
      physical: {
        dimensions: {
          length: 30.41, // cm
          width: 21.5, // cm
          height: 1.13 // cm
        },
        weight: 1.24, // kg
        color: "Midnight",
        material: "100% recycled aluminum"
      }
    },
    inventory: {
      totalStock: 45,
      availableStock: 38,
      reservedStock: 7,
      lowStockThreshold: 10,
      restockDate: ISODate("2024-02-01"),
      supplier: "Apple Direct",
      warehouseLocation: "Cupertino, CA"
    },
    media: {
      images: [
        {
          url: "/images/macbook-air-m2-front.jpg",
          alt: "MacBook Air M2 front view",
          type: "primary",
          order: 1
        },
        {
          url: "/images/macbook-air-m2-side.jpg",
          alt: "MacBook Air M2 side profile",
          type: "gallery",
          order: 2
        }
      ],
      videos: [
        {
          url: "/videos/macbook-air-m2-demo.mp4",
          title: "MacBook Air M2 Performance Demo",
          duration: 120 // seconds
        }
      ]
    },
    seo: {
      metaTitle: "MacBook Air 13-inch M2 - Ultra-Fast Performance | TechStore",
      metaDescription: "Experience blazing performance with the MacBook Air M2. 8-core CPU, stunning Retina display, all-day battery life. Free shipping available.",
      keywords: ["MacBook Air", "M2 chip", "laptop", "Apple", "ultrabook"],
      canonicalUrl: "/products/macbook-air-13-m2-256gb"
    },
    analytics: {
      views: 2847,
      clicks: 189,
      conversions: 23,
      conversionRate: 12.2, // percentage
      avgTimeOnPage: 4.2, // minutes
      bounceRate: 35.8 // percentage
    },
    createdAt: ISODate("2023-06-15T10:00:00Z"),
    updatedAt: ISODate("2024-01-15T16:30:00Z")
  }
];
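When loading thousands of documents like these, inserting one at a time is slow. A common pattern is to chunk the array before `insertMany`. The helper below is a sketch; the 500-document batch size is an assumption to tune against document size and server limits.

```javascript
// Sketch: chunk a large document array into batches for bulk insertion.
function toBatches(docs, batchSize = 500) {
  const batches = [];
  for (let i = 0; i < docs.length; i += batchSize) {
    batches.push(docs.slice(i, i + batchSize));
  }
  return batches;
}

// With the MongoDB driver, usage would look roughly like:
//   for (const batch of toBatches(productCatalog)) {
//     await db.collection("products").insertMany(batch, { ordered: false });
//   }
// `ordered: false` lets the server continue past individual failures.
```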

API Testing with Comprehensive Mock Data

Generate realistic API responses for thorough testing coverage:

REST API Mock Responses

{
  "endpoints": {
    "GET /api/users": {
      "success_response": {
        "status": 200,
        "data": {
          "users": [
            {
              "id": 1001,
              "uuid": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
              "username": "sarah_dev_2024",
              "email": "sarah.johnson@techcorp.com",
              "profile": {
                "firstName": "Sarah",
                "lastName": "Johnson", 
                "avatar": "https://cdn.example.com/avatars/sarah_j.jpg",
                "bio": "Full-stack developer passionate about clean code and user experience",
                "location": "San Francisco, CA",
                "website": "https://sarahjohnson.dev",
                "socialLinks": {
                  "github": "https://github.com/sarahjdev",
                  "linkedin": "https://linkedin.com/in/sarah-johnson-dev",
                  "twitter": "https://twitter.com/sarah_codes"
                }
              },
              "stats": {
                "postsCount": 47,
                "followersCount": 1284,
                "followingCount": 892,
                "likesReceived": 3456,
                "commentsReceived": 987
              },
              "preferences": {
                "theme": "dark",
                "language": "en",
                "timezone": "America/Los_Angeles",
                "emailNotifications": true,
                "pushNotifications": false
              },
              "subscription": {
                "plan": "premium",
                "status": "active",
                "expiresAt": "2024-12-31T23:59:59Z",
                "features": ["advanced_analytics", "priority_support", "custom_themes"]
              },
              "lastActivity": "2024-01-15T10:30:00Z",
              "createdAt": "2022-03-15T14:22:00Z",
              "updatedAt": "2024-01-15T09:15:00Z"
            }
          ],
          "pagination": {
            "currentPage": 1,
            "perPage": 20,
            "totalItems": 1847,
            "totalPages": 93,
            "hasNextPage": true,
            "hasPreviousPage": false
          }
        },
        "meta": {
          "requestId": "req_1705320600_abc123",
          "timestamp": "2024-01-15T10:30:00Z",
          "processingTime": 125,
          "version": "v2.1"
        }
      },
      "error_responses": {
        "401": {
          "status": 401,
          "error": {
            "code": "UNAUTHORIZED",
            "message": "Authentication required to access this resource",
            "details": "Please provide a valid API key in the Authorization header"
          },
          "meta": {
            "requestId": "req_1705320600_def456",
            "timestamp": "2024-01-15T10:30:00Z",
            "documentation": "https://api.example.com/docs/authentication"
          }
        },
        "429": {
          "status": 429,
          "error": {
            "code": "RATE_LIMIT_EXCEEDED",
            "message": "Too many requests. Please try again later",
            "details": "Rate limit: 1000 requests per hour. Reset in 1847 seconds"
          },
          "meta": {
            "requestId": "req_1705320600_ghi789",
            "timestamp": "2024-01-15T10:30:00Z",
            "retryAfter": 1847
          }
        }
      }
    },
    "POST /api/orders": {
      "success_response": {
        "status": 201,
        "data": {
          "order": {
            "id": "order_2024_0115_001",
            "orderNumber": "ORD-24-000001",
            "customer": {
              "id": 1001,
              "email": "sarah.johnson@techcorp.com",
              "name": "Sarah Johnson"
            },
            "items": [
              {
                "productId": "prod_laptop_001",
                "sku": "LAPTOP-MBA-M2-256",
                "name": "MacBook Air 13-inch with M2 Chip",
                "quantity": 1,
                "unitPrice": 1099.00,
                "totalPrice": 1099.00,
                "discounts": [
                  {
                    "type": "coupon",
                    "code": "SAVE10",
                    "amount": 109.90,
                    "percentage": 10
                  }
                ]
              }
            ],
            "pricing": {
              "subtotal": 1099.00,
              "discountTotal": 109.90,
              "taxTotal": 79.13,
              "shippingTotal": 0.00,
              "total": 1068.23,
              "currency": "USD"
            },
            "shipping": {
              "method": "standard",
              "carrier": "FedEx",
              "estimatedDelivery": "2024-01-18",
              "address": {
                "name": "Sarah Johnson",
                "street": "123 Tech Street",
                "city": "San Francisco",
                "state": "CA",
                "zipCode": "94105",
                "country": "US",
                "phone": "+1-555-0123"
              }
            },
            "payment": {
              "method": "credit_card",
              "status": "authorized",
              "transactionId": "txn_abc123def456",
              "last4": "4242",
              "brand": "visa"
            },
            "status": "confirmed",
            "fulfillmentStatus": "pending",
            "tracking": {
              "number": null,
              "url": null,
              "carrier": null
            },
            "createdAt": "2024-01-15T10:30:00Z",
            "updatedAt": "2024-01-15T10:30:00Z",
            "estimatedFulfillment": "2024-01-16T14:00:00Z"
          }
        },
        "meta": {
          "requestId": "req_1705320600_order123",
          "timestamp": "2024-01-15T10:30:00Z",
          "processingTime": 847
        }
      },
      "validation_errors": {
        "400": {
          "status": 400,
          "error": {
            "code": "VALIDATION_ERROR",
            "message": "Request validation failed",
            "details": {
              "items": ["At least one item is required"],
              "shipping.address.zipCode": ["Invalid zip code format"],
              "payment.method": ["Payment method 'bitcoin' is not supported"]
            }
          },
          "meta": {
            "requestId": "req_1705320600_val_err",
            "timestamp": "2024-01-15T10:30:00Z"
          }
        }
      }
    }
  }
}
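The pagination envelope shown in these responses is easy to compute consistently across endpoints. This small helper is a sketch of that calculation; field names match the JSON above.

```javascript
// Build the pagination object used in the mock responses above
// from total item count, current page, and page size.
function paginate(totalItems, currentPage, perPage) {
  const totalPages = Math.max(1, Math.ceil(totalItems / perPage));
  return {
    currentPage,
    perPage,
    totalItems,
    totalPages,
    hasNextPage: currentPage < totalPages,
    hasPreviousPage: currentPage > 1,
  };
}
```

For example, `paginate(1847, 1, 20)` reproduces the envelope in the `GET /api/users` response: 93 total pages, with a next page but no previous one.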

GraphQL Schema and Mock Resolvers

# Comprehensive GraphQL schema for mock data
type User {
  id: ID!
  username: String!
  email: String!
  profile: UserProfile!
  posts(first: Int, after: String): PostConnection!
  followers(first: Int): UserConnection!
  following(first: Int): UserConnection!
  stats: UserStats!
  createdAt: DateTime!
  updatedAt: DateTime!
}

type UserProfile {
firstName: String!
lastName: String!
fullName: String!
bio: String
avatar: String
location: String
website: String
socialLinks: SocialLinks
preferences: UserPreferences!
}

type UserStats {
postsCount: Int!
followersCount: Int!
followingCount: Int!
likesReceived: Int!
commentsReceived: Int!
totalViews: Int!
}

type Post {
id: ID!
title: String!
content: String!
excerpt: String
author: User!
category: Category
tags: [Tag!]!
featuredImage: String
status: PostStatus!
publishedAt: DateTime
stats: PostStats!
comments(first: Int): CommentConnection!
createdAt: DateTime!
updatedAt: DateTime!
}

type PostStats {
viewCount: Int!
likeCount: Int!
commentCount: Int!
shareCount: Int!
readingTime: Int! # minutes
}

Mock resolver implementations

const mockResolvers = {
  Query: {
    user: (_, { id }) => ({
      id,
      username: `user_${id}`,
      email: `user${id}@example.com`,
      profile: {
        firstName: faker.person.firstName(),
        lastName: faker.person.lastName(),
        bio: faker.lorem.paragraph(),
        avatar: faker.image.avatar(),
        location: `${faker.location.city()}, ${faker.location.state()}`,
        website: faker.internet.url(),
        socialLinks: {
          github: `https://github.com/user${id}`,
          twitter: `https://twitter.com/user${id}`,
          linkedin: `https://linkedin.com/in/user${id}`
        },
        preferences: {
          theme: faker.helpers.arrayElement(['light', 'dark', 'auto']),
          language: 'en',
          timezone: faker.location.timeZone()
        }
      },
      stats: {
        postsCount: faker.number.int({ min: 0, max: 100 }),
        followersCount: faker.number.int({ min: 0, max: 5000 }),
        followingCount: faker.number.int({ min: 0, max: 1000 }),
        likesReceived: faker.number.int({ min: 0, max: 10000 }),
        commentsReceived: faker.number.int({ min: 0, max: 2000 })
      },
      createdAt: faker.date.past(),
      updatedAt: faker.date.recent()
    }),

    posts: (_, { first = 10, category, authorId }) => {
      const posts = Array.from({ length: first }, (_, i) => ({
        id: `post_${Date.now()}_${i}`,
        title: faker.lorem.sentence(),
        content: faker.lorem.paragraphs(5),
        excerpt: faker.lorem.paragraph(),
        category: category || {
          id: faker.string.uuid(),
          name: faker.helpers.arrayElement(['Technology', 'Design', 'Business', 'Lifestyle']),
          slug: faker.lorem.slug()
        },
        tags: Array.from({ length: faker.number.int({ min: 2, max: 6 }) }, () => ({
          id: faker.string.uuid(),
          name: faker.lorem.word(),
          slug: faker.lorem.slug()
        })),
        featuredImage: faker.image.url(),
        status: faker.helpers.arrayElement(['DRAFT', 'PUBLISHED', 'ARCHIVED']),
        publishedAt: faker.date.past(),
        stats: {
          viewCount: faker.number.int({ min: 0, max: 10000 }),
          likeCount: faker.number.int({ min: 0, max: 500 }),
          commentCount: faker.number.int({ min: 0, max: 100 }),
          shareCount: faker.number.int({ min: 0, max: 200 }),
          readingTime: faker.number.int({ min: 2, max: 15 })
        },
        createdAt: faker.date.past(),
        updatedAt: faker.date.recent()
      }));

      return {
        edges: posts.map(post => ({ node: post, cursor: btoa(post.id) })),
        pageInfo: {
          hasNextPage: Math.random() > 0.5,
          hasPreviousPage: false,
          startCursor: posts.length > 0 ? btoa(posts[0].id) : null,
          endCursor: posts.length > 0 ? btoa(posts[posts.length - 1].id) : null
        }
      };
    }
  },

  User: {
    posts: (user, { first = 10 }) => {
      // Return the user's posts with realistic data
      return mockResolvers.Query.posts(null, { first, authorId: user.id });
    },

    followers: (user, { first = 10 }) => {
      const followers = Array.from({ length: first }, (_, i) => ({
        id: `follower_${user.id}_${i}`,
        username: faker.internet.userName(),
        profile: {
          firstName: faker.person.firstName(),
          lastName: faker.person.lastName(),
          avatar: faker.image.avatar()
        }
      }));

      return {
        edges: followers.map(follower => ({ node: follower })),
        pageInfo: { hasNextPage: false, hasPreviousPage: false }
      };
    }
  }
};
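The connection pattern above encodes cursors with `btoa`, which exists in browsers but only in recent Node.js versions. A portable sketch using `Buffer` keeps cursors opaque on any Node runtime:

```javascript
// Opaque base64 cursor helpers for Relay-style connections.
// Buffer is available in all Node.js versions, unlike global btoa/atob.
const encodeCursor = (id) => Buffer.from(String(id), "utf8").toString("base64");
const decodeCursor = (cursor) => Buffer.from(cursor, "base64").toString("utf8");
```

Clients treat the cursor as an opaque token; the server decodes it back to an ID when resolving the next page.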

Frontend Development with Rich Mock Data

Accelerate frontend development with comprehensive component data:

React Component Mock Data

// Mock data service for React applications
export class MockDataService {
  static generateUserProfiles(count = 10) {
    return Array.from({ length: count }, (_, i) => ({
      id: i + 1,
      username: faker.internet.userName(),
      email: faker.internet.email(),
      avatar: faker.image.avatar(),
      profile: {
        firstName: faker.person.firstName(),
        lastName: faker.person.lastName(),
        bio: faker.lorem.paragraph(),
        location: `${faker.location.city()}, ${faker.location.state()}`,
        website: faker.internet.url(),
        joinDate: faker.date.past().toISOString(),
        verified: faker.datatype.boolean(0.3),
        premium: faker.datatype.boolean(0.15)
      },
      stats: {
        posts: faker.number.int({ min: 0, max: 150 }),
        followers: faker.number.int({ min: 0, max: 5000 }),
        following: faker.number.int({ min: 0, max: 1200 }),
        likes: faker.number.int({ min: 0, max: 15000 })
      },
      preferences: {
        theme: faker.helpers.arrayElement(['light', 'dark', 'auto']),
        language: faker.helpers.arrayElement(['en', 'es', 'fr', 'de']),
        notifications: {
          email: faker.datatype.boolean(0.8),
          push: faker.datatype.boolean(0.6),
          sms: faker.datatype.boolean(0.3)
        }
      }
    }));
  }

  static generateDashboardData() {
    return {
      metrics: {
        totalUsers: faker.number.int({ min: 10000, max: 100000 }),
        activeUsers: faker.number.int({ min: 5000, max: 50000 }),
        totalRevenue: faker.number.float({ min: 100000, max: 1000000, fractionDigits: 2 }),
        growthRate: faker.number.float({ min: -10, max: 25, fractionDigits: 1 }),
        conversionRate: faker.number.float({ min: 1, max: 8, fractionDigits: 2 })
      },
      chartData: {
        userGrowth: Array.from({ length: 12 }, (_, i) => ({
          month: new Date(2024, i).toLocaleString('default', { month: 'short' }),
          users: faker.number.int({ min: 1000, max: 8000 }),
          revenue: faker.number.int({ min: 20000, max: 80000 })
        })),
        deviceBreakdown: [
          { device: 'Desktop', percentage: faker.number.int({ min: 40, max: 60 }) },
          { device: 'Mobile', percentage: faker.number.int({ min: 30, max: 50 }) },
          { device: 'Tablet', percentage: faker.number.int({ min: 5, max: 15 }) }
        ],
        topPages: Array.from({ length: 10 }, (_, i) => ({
          page: faker.internet.url(),
          views: faker.number.int({ min: 1000, max: 50000 }),
          bounceRate: faker.number.float({ min: 20, max: 80, fractionDigits: 1 }),
          avgTime: faker.number.int({ min: 30, max: 300 }) // seconds
        }))
      },
      recentActivity: Array.from({ length: 20 }, (_, i) => ({
        id: i + 1,
        user: {
          name: faker.person.fullName(),
          avatar: faker.image.avatar(),
          email: faker.internet.email()
        },
        action: faker.helpers.arrayElement([
          'signed up', 'made a purchase', 'updated profile',
          'posted content', 'left a review', 'shared content'
        ]),
        timestamp: faker.date.recent().toISOString(),
        metadata: {
          value: faker.number.float({ min: 10, max: 500, fractionDigits: 2 }),
          location: faker.location.city(),
          device: faker.helpers.arrayElement(['Desktop', 'Mobile', 'Tablet'])
        }
      }))
    };
  }

  static generateEcommerceData() {
    return {
      products: Array.from({ length: 50 }, (_, i) => ({
        id: i + 1,
        sku: `PRD-${faker.string.alphanumeric(8).toUpperCase()}`,
        name: faker.commerce.productName(),
        description: faker.commerce.productDescription(),
        price: parseFloat(faker.commerce.price()),
        compareAtPrice: Math.random() > 0.7 ? parseFloat(faker.commerce.price()) : null,
        category: {
          id: faker.number.int({ min: 1, max: 10 }),
          name: faker.commerce.department(),
          slug: faker.lorem.slug()
        },
        brand: {
          id: faker.number.int({ min: 1, max: 20 }),
          name: faker.company.name(),
          logo: faker.image.url()
        },
        images: Array.from({ length: faker.number.int({ min: 1, max: 5 }) }, () => ({
          url: faker.image.url(),
          alt: faker.lorem.sentence()
        })),
        inventory: {
          available: faker.number.int({ min: 0, max: 100 }),
          reserved: faker.number.int({ min: 0, max: 10 }),
          incoming: faker.number.int({ min: 0, max: 50 })
        },
        metrics: {
          views: faker.number.int({ min: 100, max: 10000 }),
          purchases: faker.number.int({ min: 0, max: 500 }),
          rating: faker.number.float({ min: 3, max: 5, fractionDigits: 1 }),
          reviewCount: faker.number.int({ min: 0, max: 200 })
        },
        seo: {
          title: faker.lorem.sentence(),
          description: faker.lorem.paragraph(),
          keywords: Array.from({ length: 5 }, () => faker.lorem.word())
        },
        status: faker.helpers.arrayElement(['active', 'inactive', 'discontinued']),
        featured: faker.datatype.boolean(0.2),
        createdAt: faker.date.past().toISOString(),
        updatedAt: faker.date.recent().toISOString()
      })),

      orders: Array.from({ length: 100 }, (_, i) => {
        // Build items first so line totals and the payment amount are
        // concrete numbers rather than unevaluated functions
        const items = Array.from({ length: faker.number.int({ min: 1, max: 5 }) }, () => {
          const quantity = faker.number.int({ min: 1, max: 3 });
          const unitPrice = parseFloat(faker.commerce.price());
          return {
            productId: faker.number.int({ min: 1, max: 50 }),
            name: faker.commerce.productName(),
            sku: `PRD-${faker.string.alphanumeric(8).toUpperCase()}`,
            quantity,
            unitPrice,
            totalPrice: quantity * unitPrice
          };
        });
        const shippingCost = parseFloat(faker.commerce.price({ min: 0, max: 50 }));

        return {
          id: i + 1,
          orderNumber: `ORD-${faker.string.alphanumeric(8).toUpperCase()}`,
          customer: {
            id: faker.number.int({ min: 1, max: 1000 }),
            name: faker.person.fullName(),
            email: faker.internet.email(),
            phone: faker.phone.number()
          },
          items,
          shipping: {
            method: faker.helpers.arrayElement(['standard', 'express', 'overnight']),
            cost: shippingCost,
            address: {
              name: faker.person.fullName(),
              street: faker.location.streetAddress(),
              city: faker.location.city(),
              state: faker.location.state(),
              zipCode: faker.location.zipCode(),
              country: faker.location.country()
            },
            estimatedDelivery: faker.date.future().toISOString()
          },
          payment: {
            method: faker.helpers.arrayElement(['credit_card', 'paypal', 'apple_pay', 'google_pay']),
            status: faker.helpers.arrayElement(['pending', 'authorized', 'captured', 'failed']),
            transactionId: faker.string.alphanumeric(16),
            amount: items.reduce((sum, item) => sum + item.totalPrice, 0) + shippingCost
          },
          status: faker.helpers.arrayElement(['pending', 'processing', 'shipped', 'delivered', 'cancelled']),
          notes: faker.datatype.boolean(0.3) ? faker.lorem.sentence() : null,
          createdAt: faker.date.past().toISOString(),
          updatedAt: faker.date.recent().toISOString()
        };
      })
    };
  }

  static generateFormTestData() {
    return {
      validInputs: [
        {
          firstName: faker.person.firstName(),
          lastName: faker.person.lastName(),
          email: faker.internet.email(),
          phone: faker.phone.number(),
          company: faker.company.name(),
          website: faker.internet.url(),
          message: faker.lorem.paragraphs(2)
        }
      ],
      invalidInputs: [
        {
          firstName: '', // Missing required field
          lastName: faker.person.lastName(),
          email: 'invalid-email', // Invalid format
          phone: '123', // Too short
          website: 'not-a-url' // Invalid URL
        }
      ],
      edgeCases: [
        {
          firstName: 'A'.repeat(100), // Very long input
          lastName: "José-María O'Connor", // Special characters
          email: 'test+tag@subdomain.domain.co.uk', // Complex valid email
          phone: '+1-800-555-0199 ext. 1234', // Phone with extension
          website: 'https://subdomain.example-site.com/path?query=value#anchor'
        }
      ]
    };
  }
}
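These datasets are only useful if something consumes them. As a minimal illustration, a validator like the sketch below can be run against `validInputs`, `invalidInputs`, and `edgeCases`; the regex is deliberately simple (it is not a full RFC 5322 implementation) and serves only to show the test flow.

```javascript
// Simple email validator to exercise the form test datasets above.
// Illustrative only: accepts anything of the shape local@domain.tld.
const isValidEmail = (s) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s);

// Typical usage in a test suite: every valid input passes,
// every invalid input fails, and edge cases are checked explicitly.
```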

Automated Mock Data Generation

CI/CD Pipeline Integration

Automate mock data generation in your development workflow:

GitHub Actions Workflow

# .github/workflows/mock-data-generation.yml
name: Generate Mock Data for Development

on:
  push:
    branches: [develop, staging]
  schedule:
    - cron: '0 2 * * *' # Daily at 2 AM
  workflow_dispatch: # Manual trigger

jobs:
  generate-mock-data:
    runs-on: ubuntu-latest

    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: testdb
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432

    steps:
      - uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '18'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Setup database schema
        run: |
          npm run db:migrate
          npm run db:seed:schema
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/testdb

      - name: Generate mock data
        run: |
          npm run generate:mock-data -- \
            --users 1000 \
            --products 500 \
            --orders 2000 \
            --posts 1500 \
            --comments 5000
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/testdb
          MOCK_DATA_SEED: ${{ github.run_number }}

      - name: Validate data integrity
        run: npm run validate:mock-data
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/testdb

      - name: Export data dumps
        run: |
          mkdir -p mock-data-exports
          npm run export:mock-data -- \
            --format json,csv,sql \
            --output mock-data-exports/
        env:
          DATABASE_URL: postgresql://postgres:postgres@localhost:5432/testdb

      - name: Upload mock data artifacts
        uses: actions/upload-artifact@v4
        with:
          name: mock-data-${{ github.run_number }}
          path: mock-data-exports/
          retention-days: 30

      - name: Update development environment
        if: github.ref == 'refs/heads/develop'
        run: |
          npm run deploy:mock-data -- \
            --environment development \
            --backup-existing
        env:
          DEV_DATABASE_URL: ${{ secrets.DEV_DATABASE_URL }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Notify team
        if: always()
        uses: 8398a7/action-slack@v3
        with:
          status: ${{ job.status }}
          text: |
            Mock data generation completed!
            - Users: 1000
            - Products: 500
            - Orders: 2000
            - Build: ${{ github.run_number }}
            Download: ${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

Docker-based Mock Data Service

# Dockerfile.mockdata
FROM node:18-alpine

WORKDIR /app

# Install production dependencies
COPY package*.json ./
RUN npm ci --omit=dev

# Copy mock data generation scripts
COPY scripts/mock-data/ ./scripts/
COPY config/ ./config/

# Set up environment
ENV NODE_ENV=development
ENV MOCK_DATA_VERSION=latest

# Health check (Docker treats exit code 0 as healthy, non-zero as unhealthy)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node scripts/health-check.js

# Run the mock data generation service
CMD ["node", "scripts/mock-data-service.js"]

# docker-compose.mock.yml
version: '3.8'

services:
  mock-data-generator:
    build:
      context: .
      dockerfile: Dockerfile.mockdata
    environment:
      - DATABASE_URL=postgresql://postgres:password@postgres:5432/mockdb
      - REDIS_URL=redis://redis:6379
      - GENERATION_SCHEDULE=0 */6 * * *
      - DATA_RETENTION_DAYS=7
    volumes:
      - ./mock-data-output:/app/output
      - ./config/mock-data.yml:/app/config/generation.yml
    depends_on:
      - postgres
      - redis
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=mockdb
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres_mock_data:/var/lib/postgresql/data
      - ./scripts/sql/schema.sql:/docker-entrypoint-initdb.d/01-schema.sql
    ports:
      - "5433:5432"

  redis:
    image: redis:7-alpine
    volumes:
      - redis_mock_data:/data
    ports:
      - "6380:6379"

  mock-api:
    build:
      context: .
      dockerfile: Dockerfile.mockapi
    environment:
      - DATABASE_URL=postgresql://postgres:password@postgres:5432/mockdb
      - REDIS_URL=redis://redis:6379
    ports:
      - "3001:3000"
    depends_on:
      - postgres
      - redis
    restart: unless-stopped

volumes:
  postgres_mock_data:
  redis_mock_data:

Performance Testing Datasets

Generate large-scale datasets for performance benchmarking:

// Performance testing data generator
const { faker } = require('@faker-js/faker');

class PerformanceDataGenerator {
  constructor(config) {
    this.config = {
      batchSize: 1000,
      maxMemoryUsage: '1GB',
      outputFormat: 'json',
      compression: true,
      ...config
    };
  }

  async generateLoadTestData(scenario) {
    const scenarios = {
      'e-commerce-peak': {
        users: 100000,
        products: 50000,
        orders: 500000,
        reviews: 200000,
        duration: '2 hours',
        pattern: 'spike_then_sustained'
      },
      'social-media-viral': {
        users: 50000,
        posts: 100000,
        comments: 1000000,
        likes: 5000000,
        duration: '1 hour',
        pattern: 'exponential_growth'
      },
      'enterprise-migration': {
        employees: 25000,
        departments: 500,
        projects: 10000,
        documents: 500000,
        duration: 'batch_processing',
        pattern: 'steady_load'
      }
    };

    const config = scenarios[scenario];
    if (!config) throw new Error(`Unknown scenario: ${scenario}`);

    console.log(`Generating ${scenario} test data...`);

    const startTime = Date.now();
    const results = await this.generateInBatches(config);
    const endTime = Date.now();

    return {
      scenario,
      config,
      results,
      performance: {
        generationTime: endTime - startTime,
        memoryUsage: process.memoryUsage(),
        recordsPerSecond: results.totalRecords / ((endTime - startTime) / 1000)
      }
    };
  }

  async generateInBatches(config) {
    const results = {
      files: [],
      totalRecords: 0,
      checksums: {}
    };

    for (const [entityType, count] of Object.entries(config)) {
      if (typeof count !== 'number') continue;

      console.log(`Generating ${count} ${entityType}...`);

      const batches = Math.ceil(count / this.config.batchSize);
      for (let i = 0; i < batches; i++) {
        const batchSize = Math.min(this.config.batchSize, count - (i * this.config.batchSize));
        const batch = await this.generateEntityBatch(entityType, batchSize, i);

        const filename = `${entityType}_batch_${i.toString().padStart(4, '0')}.json`;
        await this.writeBatchToFile(filename, batch);

        results.files.push(filename);
        results.totalRecords += batch.length;

        // Progress tracking
        const progress = ((i + 1) / batches * 100).toFixed(1);
        console.log(`  ${entityType}: ${progress}% complete`);
      }
    }

    return results;
  }

  async generateEntityBatch(entityType, count, batchIndex) {
    const generators = {
      users: () => this.generateUser(),
      products: () => this.generateProduct(),
      orders: () => this.generateOrder(),
      reviews: () => this.generateReview(),
      posts: () => this.generatePost(),
      comments: () => this.generateComment(),
      employees: () => this.generateEmployee(),
      departments: () => this.generateDepartment(),
      projects: () => this.generateProject(),
      documents: () => this.generateDocument()
    };

    const generator = generators[entityType];
    if (!generator) throw new Error(`Unknown entity type: ${entityType}`);

    return Array.from({ length: count }, (_, i) => ({
      ...generator(),
      _batchIndex: batchIndex,
      _recordIndex: i,
      _generatedAt: new Date().toISOString()
    }));
  }

  generateUser() {
    return {
      id: faker.string.uuid(),
      username: faker.internet.userName(),
      email: faker.internet.email(),
      password: faker.internet.password(),
      profile: {
        firstName: faker.person.firstName(),
        lastName: faker.person.lastName(),
        avatar: faker.image.avatar(),
        bio: faker.lorem.paragraph(),
        location: faker.location.city(),
        website: faker.internet.url(),
        birthDate: faker.date.birthdate().toISOString(),
        preferences: {
          theme: faker.helpers.arrayElement(['light', 'dark']),
          language: faker.helpers.arrayElement(['en', 'es', 'fr']),
          notifications: faker.datatype.boolean()
        }
      },
      stats: {
        loginCount: faker.number.int({ min: 0, max: 1000 }),
        lastLogin: faker.date.recent().toISOString(),
        accountValue: faker.number.float({ min: 0, max: 10000, fractionDigits: 2 })
      },
      createdAt: faker.date.past().toISOString()
    };
  }

  async writeBatchToFile(filename, data) {
    const fs = require('fs').promises;
    const path = require('path');
    const zlib = require('zlib');

    const outputDir = './performance-test-data';
    await fs.mkdir(outputDir, { recursive: true });

    // `filepath` must be declared with `let`: compression appends a .gz suffix
    let filepath = path.join(outputDir, filename);
    let content = JSON.stringify(data, null, this.config.compression ? 0 : 2);

    if (this.config.compression) {
      content = zlib.gzipSync(content);
      filepath += '.gz';
    }

    await fs.writeFile(filepath, content);
    return filepath;
  }
}

// Usage example
const generator = new PerformanceDataGenerator({
  batchSize: 5000,
  compression: true,
  outputFormat: 'json'
});

// Generate e-commerce load test data
// (top-level await requires an ES module or an async wrapper)
await generator.generateLoadTestData('e-commerce-peak');


Supercharge your development workflow with production-ready mock data! Our comprehensive generator creates realistic datasets that accelerate development, improve testing coverage, and ensure your applications perform flawlessly from day one.

Data Field Types Visualization

Interactive diagram showing all supported data types and their relationships

Export Formats

Visual guide to JSON, CSV, SQL, and XML output formats

Integration Examples

Code snippets showing integration with popular frameworks

Ready to Generate Your Data?

Start creating high-quality synthetic data in minutes with our powerful, AI-driven generator. No registration required, unlimited usage.

Start Generating Now - Free

Frequently Asked Questions

How is mock data different from synthetic data?
Mock data is specifically designed for development workflows: it's often simpler, faster to generate, and optimized for testing scenarios. Synthetic data focuses more on statistical accuracy and privacy preservation. For development, mock data is usually more practical because it prioritizes speed and specific testing needs over perfect realism.

How do I keep mock data fresh in a CI/CD pipeline?
Use automated scripts in your pipeline to generate fresh mock data on each deployment. Set up database seeding as part of your deployment process, use GitHub Actions or similar tools to generate data artifacts, and ensure your staging environments get refreshed data regularly. This keeps your development environments consistent and realistic.

What are best practices for seeding databases with mock data?
Start with schema-first generation: define your database structure, then generate data that respects foreign key constraints and business rules. Use batch processing for large datasets, implement data cleanup strategies, and consider using database snapshots for faster environment resets.

How do I make sure mock data never contains real personal information?
Always generate mock data from scratch using libraries like Faker.js rather than modifying real data. Implement validation checks in your generation process, use clearly fake domains and phone numbers, and establish team policies that prohibit using production data exports as mock data sources.

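The "clearly fake domains and phone numbers" mentioned above actually exist as reserved ranges: RFC 2606 reserves `example.com` for documentation, and North American numbers of the form 555-0100 through 555-0199 are set aside for fiction. A dependency-free sketch:

```javascript
// Sketch: generate contact details that are unambiguously fake.
// example.com is reserved by RFC 2606 and can never receive real mail;
// 555-0100..555-0199 numbers are reserved for fictional use.
function fakeEmail(username) {
  return `${username}@example.com`;
}

function fakePhone() {
  const line = String(Math.floor(Math.random() * 100)).padStart(2, '0');
  return `+1-202-555-01${line}`;
}

console.log(fakeEmail('test_user'), fakePhone());
```

A validation check in your pipeline can then be as simple as rejecting any generated email that does not end in a reserved domain.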
How much mock data should I generate?
For unit tests: 10-100 records per entity. For integration tests: 100-1,000 records. For frontend development: 50-500 records for realistic UI testing. For performance testing: 10,000+ records to identify bottlenecks. Scale based on your actual use case and testing requirements.

How do I maintain referential integrity across related tables?
Generate data in dependency order (users first, then orders that reference users), use foreign key constraints during generation, implement validation checks after generation, and consider using transaction-based generation to ensure consistency across related tables.

Which tools should I use for API mocking?
For JavaScript: Faker.js, MSW (Mock Service Worker), json-server. For Python: Faker, Factory Boy. For general use: Postman mock servers, WireMock, Mockoon. Choose based on your technology stack and whether you need client-side or server-side mocking.

How should I handle mock images and file uploads?
Use placeholder image services (like Lorem Picsum), generate mock file metadata without actual files, implement file storage mocking in your development environment, and consider using CDN placeholder services for realistic media URLs that don't require actual file storage.

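Both ideas above combine nicely: Lorem Picsum serves images by URL pattern (including a seeded variant that returns the same image for the same seed, which keeps visual snapshots deterministic), so mock file records can carry realistic URLs with zero stored bytes. The metadata shape here is a hypothetical example:

```javascript
// Sketch: placeholder image URLs via Lorem Picsum (picsum.photos) plus
// mock file metadata that stores no actual file bytes.
function mockImageUrl({ width = 400, height = 300, seed } = {}) {
  // The seeded endpoint returns the same image for the same seed.
  return seed
    ? `https://picsum.photos/seed/${encodeURIComponent(seed)}/${width}/${height}`
    : `https://picsum.photos/${width}/${height}`;
}

function mockFileMeta(name) {
  // Metadata only -- nothing is uploaded or stored anywhere
  return {
    filename: name,
    mimeType: 'image/jpeg',
    sizeBytes: Math.floor(Math.random() * 500000) + 1000,
    url: mockImageUrl({ seed: name })
  };
}
```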
Can I reuse mock data across environments?
Yes, but with considerations. Use versioned data exports, implement environment-specific configurations (like different user counts), ensure data doesn't contain environment-specific URLs or credentials, and refresh data regularly to prevent staleness and unrealistic patterns.

How do I speed up large-scale mock data generation?
Use batch processing instead of individual record generation, implement parallel processing for independent data types, cache frequently used lookup data, use database bulk insert operations, and consider generating data directly in the database using stored procedures for maximum performance.

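The batch-processing idea above reduces to a small, reusable pattern: chunk the rows and hand each chunk to one bulk-insert call instead of paying one round-trip per row. In this sketch `insertBatch` is a stand-in for your client's bulk API (e.g. a multi-row INSERT in node-postgres, or `insertMany` in the MongoDB driver):

```javascript
// Sketch: fixed-size chunking feeding a bulk-insert callback.
function chunk(rows, size) {
  const batches = [];
  for (let i = 0; i < rows.length; i += size) {
    batches.push(rows.slice(i, i + size));
  }
  return batches;
}

async function bulkInsert(rows, insertBatch, batchSize = 1000) {
  let inserted = 0;
  for (const batch of chunk(rows, batchSize)) {
    await insertBatch(batch); // one round-trip per batch, not per row
    inserted += batch.length;
  }
  return inserted;
}
```

With a batch size of 1000, inserting 2500 rows costs three database round-trips (1000 + 1000 + 500) instead of 2500; tune the batch size against your database's packet and transaction limits.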