docs(readme): remove DM from TiDB local dev environment and simplify setup

- Update README to reflect minimal TiDB standalone mode without DM components
- Remove DM master and worker service definitions, commands, and configs
- Delete all sync scripts, guides, and troubleshooting related to DM
- Simplify architecture diagram and component explanations accordingly
- Adjust quick start instructions to focus on TiDB standalone usage only
- Remove dependency on .env and sync configuration files
- Clean up docker-compose.yml to run only TiDB service in standalone mode
- Remove all references to data synchronization from TiDB Cloud or test environments
- Delete SYNC_GUIDE.md, TIDB_CLOUD_MIGRATION.md, and TROUBLESHOOTING.md files as obsolete
tigermren 2025-10-17 01:31:05 +08:00
parent 9eb1428779
commit 3e5524e1a3
16 changed files with 22 additions and 1758 deletions

README.md

@@ -1,43 +1,22 @@
# TiDB Local Development Environment
A minimal TiDB instance with Data Migration (DM) for syncing data from test environments.
A minimal TiDB instance for local development.
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Your macOS (OrbStack) │
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ DataGrip │──────▶│ TiDB │ │
│ │ (port 4000) │ │ (Standalone) │ │
│ └──────────────┘ └───────▲───────┘ │
│ │ │
│ ┌───────┴────────┐ │
│ │ DM Worker │ │
│ │ (Sync Engine) │ │
│ └───────▲────────┘ │
│ │ │
│ ┌───────┴────────┐ │
│ │ DM Master │ │
│ │ (Orchestrator) │ │
│ └───────▲────────┘ │
│ │ │
└────────────────────────────────┼─────────────────────────────┘
(Continuous Sync)
┌────────────▼─────────────┐
│ Test TiDB Instance │
│ (Remote Environment) │
└──────────────────────────┘
┌─────────────────────────────────────────────────┐
│ Your macOS (OrbStack) │
│ │
│ ┌──────────────┐ ┌──────────────┐ │
│ │ DataGrip │──────▶│ TiDB │ │
│ │ (port 4000) │ │ (Standalone) │ │
│ └──────────────┘ └──────────────┘ │
└─────────────────────────────────────────────────┘
```
**Components:**
- **TiDB (Standalone Mode)**: Runs with embedded storage (unistore), no separate PD/TiKV needed
- **DM Master**: Manages data migration tasks
- **DM Worker**: Executes the actual data synchronization from test to local
- **DataGrip/MySQL Clients**: Connect directly to TiDB on port 4000
## Quick Reference
@@ -45,24 +24,15 @@ A minimal TiDB instance with Data Migration (DM) for syncing data from test envi
| Service | Host | Port | User | Password |
|---------|------|------|------|----------|
| TiDB | `127.0.0.1` | `4000` | `root` | _(empty)_ |
| DM Master | `127.0.0.1` | `8261` | - | - |
### Useful Commands
```bash
# Start environment (TiDB only, no working sync)
# Start environment
./start.sh
# Test connection
./test-connection.sh
# NEW: Sync data from TiDB Cloud to local
./sync-data.sh
# Check sync status (deprecated - DM doesn't work with TiDB Cloud)
# ./status.sh
# or use the sync control script:
# ./sync-control.sh status
# Connect with MySQL client
mysql -h 127.0.0.1 -P 4000 -u root
@@ -79,7 +49,6 @@ For DataGrip/DBeaver setup, see [DATAGRIP_SETUP.md](DATAGRIP_SETUP.md)
- macOS with OrbStack (or Docker Desktop)
- Docker Compose v2 (command: `docker compose`, not `docker-compose`)
- Access to test TiDB instance
**Check your setup:**
```bash
@@ -90,52 +59,17 @@ For DataGrip/DBeaver setup, see [DATAGRIP_SETUP.md](DATAGRIP_SETUP.md)
## Configuration
1. **Copy and edit `.env` file**:
```bash
cp .env.example .env
# Edit .env with your test database credentials
```
Required variables:
- `TEST_DB_HOST`: Your test TiDB host
- `TEST_DB_PORT`: Test TiDB port (default: 4000)
- `TEST_DB_USER`: Test database username
- `TEST_DB_PASSWORD`: Test database password
- `DATABASE_NAME`: Database to sync
- `TABLES`: Comma-separated list of tables (e.g., "table1,table2,table3")
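For illustration, a startup script can fail fast when any of these are missing. The sketch below uses stand-in values in place of `source .env`; it is not one of the project's actual scripts:

```bash
# Sketch: verify the required .env variables are non-empty before syncing.
# The assignments below stand in for `source .env` (example values only).
TEST_DB_HOST="test-tidb.example.com"
TEST_DB_USER="root"
TEST_DB_PASSWORD="example-password"
DATABASE_NAME="your_database"
TABLES="table1,table2"

missing=""
for var in TEST_DB_HOST TEST_DB_USER TEST_DB_PASSWORD DATABASE_NAME TABLES; do
  eval "val=\${$var}"          # look up the variable named in $var
  if [ -z "$val" ]; then
    missing="$missing $var"
  fi
done

if [ -z "$missing" ]; then
  echo "config ok"
else
  echo "missing:$missing"
fi
```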
No configuration needed for the basic setup.
## Usage
### How the Sync Works
**Important Note**: The original DM-based sync approach doesn't work with TiDB Cloud Serverless because TiDB Cloud doesn't support the MySQL replication features that DM requires.
### Officially Recommended Approaches
See [TIDB_CLOUD_MIGRATION.md](TIDB_CLOUD_MIGRATION.md) for officially supported migration methods:
1. **Console Export + SQL Import** (simplest for development)
2. **Dumpling + TiDB Lightning** (for larger datasets)
3. **Periodic Sync Scripts** (created in this project)
4. **Application-Level Sync** (for real-time needs)
**For detailed sync operations, see [SYNC_GUIDE.md](SYNC_GUIDE.md)**
### Start the environment
```bash
docker compose up -d
```
This will:
1. Start TiDB in standalone mode
2. Start DM master and worker
3. Automatically configure the data source and sync task
4. Begin syncing data from test to local
### Check sync status
```bash
docker exec dm-master /dmctl --master-addr=dm-master:8261 query-status test-to-local
```
### Connect to local TiDB
@@ -153,17 +87,16 @@ mysql -h 127.0.0.1 -P 4000 -u root
See [DATAGRIP_SETUP.md](DATAGRIP_SETUP.md) for detailed client setup instructions.
### View logs
```bash
# All services
docker compose logs -f
# Specific service
docker compose logs -f tidb
docker compose logs -f dm-worker
```
### Stop the environment
```bash
docker compose down
```
@@ -172,76 +105,16 @@ docker compose down
docker compose down -v
```
## Manual DM Operations
### Check source configuration
```bash
docker exec dm-master /dmctl --master-addr=dm-master:8261 operate-source show
```
### Stop sync task
```bash
docker exec dm-master /dmctl --master-addr=dm-master:8261 stop-task test-to-local
```
### Start sync task
```bash
docker exec dm-master /dmctl --master-addr=dm-master:8261 start-task /configs/task.yaml
```
### Pause sync task
```bash
docker exec dm-master /dmctl --master-addr=dm-master:8261 pause-task test-to-local
```
### Resume sync task
```bash
docker exec dm-master /dmctl --master-addr=dm-master:8261 resume-task test-to-local
```
## Troubleshooting
### Common Issues
For detailed troubleshooting, see [TROUBLESHOOTING.md](TROUBLESHOOTING.md)
### Quick Checks
#### TiDB health check failing
```bash
# Check if TiDB is healthy
docker ps | grep tidb
# Should show: (healthy)
# If not, check logs:
docker logs tidb
```
#### DM task fails to start
- Check if test database is accessible from container
- Verify credentials in `.env`
- Check logs: `docker compose logs dm-worker`
### Tables not syncing
- Ensure tables exist in source database
- Verify table names in `TABLES` variable
- Check task status for specific errors
### TiDB connection issues
- Verify TiDB is running: `docker ps | grep tidb`
- Check health: `docker exec tidb wget -q -O- http://127.0.0.1:10080/status` (the TiDB image has no MySQL client)
### Re-initialize DM configuration
```bash
docker compose up -d dm-init
```
## Resource Usage
Default resource limits (suitable for local development):
- TiDB: 2 CPU, 2GB RAM
- DM Worker: 1 CPU, 1GB RAM
- DM Master: 0.5 CPU, 512MB RAM
Adjust in `docker-compose.yml` if needed.
@@ -250,5 +123,3 @@ Adjust in `docker-compose.yml` if needed.
- **Docker Compose v2**: This project uses `docker compose` (v2 syntax). If you have v1, either upgrade or create an alias: `alias docker-compose='docker compose'`
- **Standalone Mode**: TiDB runs without distributed storage, suitable for development only
- **Data Persistence**: Data is stored in Docker volumes, persists across restarts
- **Sync Mode**: Configured for full + incremental sync ("all" mode)
- **OrbStack DNS**: Uses `.orb.local` hostnames for container networking


@@ -1,354 +0,0 @@
# Data Sync Guide
## How Sync Works
Your TiDB Data Migration (DM) setup continuously syncs data from your test environment to the local TiDB instance.
```
Test TiDB
    │  (DM reads changes)
    ▼
DM Worker
    │  (Applies to local)
    ▼
Local TiDB
```
## Automatic Sync Setup
When you run `./start.sh`, the sync is **automatically configured and started**:
1. ✅ Reads your `.env` file for credentials and table list
2. ✅ Generates the DM task configuration
3. ✅ Configures the source connection (test TiDB)
4. ✅ Starts the sync task
5. ✅ Begins syncing data (full + incremental)
**You don't need to do anything manually!**
## Sync Modes
The sync is configured with `task-mode: "all"`:
- **Full sync**: Initial copy of all existing data
- **Incremental sync**: Continuous replication of changes (INSERT, UPDATE, DELETE)
## Managing Sync
### Easy Way (Recommended)
Use the [`sync-control.sh`](sync-control.sh) script:
```bash
# Check sync status
./sync-control.sh status
# Stop sync
./sync-control.sh stop
# Start sync
./sync-control.sh start
# Pause sync (temporarily)
./sync-control.sh pause
# Resume sync
./sync-control.sh resume
# Restart sync (stop + start)
./sync-control.sh restart
# Re-initialize configuration
./sync-control.sh reinit
```
### Advanced Way (dmctl)
Use `dmctl` directly:
```bash
# Check status
docker exec dm-master /dmctl --master-addr=dm-master:8261 query-status test-to-local
# Stop task
docker exec dm-master /dmctl --master-addr=dm-master:8261 stop-task test-to-local
# Start task
docker exec dm-master /dmctl --master-addr=dm-master:8261 start-task test-to-local
# Pause task
docker exec dm-master /dmctl --master-addr=dm-master:8261 pause-task test-to-local
# Resume task
docker exec dm-master /dmctl --master-addr=dm-master:8261 resume-task test-to-local
```
## Checking Sync Status
### Quick Check
```bash
./status.sh
```
This shows:
- Source configuration
- Task status (running, paused, stopped)
- Current sync position
- Error messages (if any)
- Local databases
### Detailed Status
```bash
./sync-control.sh status
```
### Verify Data Sync
Connect to local TiDB and verify:
```sql
-- Connect
mysql -h 127.0.0.1 -P 4000 -u root
-- Check databases
SHOW DATABASES;
-- Switch to your database
USE your_database;
-- Check tables
SHOW TABLES;
-- Verify row count
SELECT COUNT(*) FROM table1;
-- Compare with source (if you have access)
-- Run the same query on test environment
```
## Configuration Files
### Environment Variables (`.env`)
This is where you configure what to sync:
```bash
# Source database
TEST_DB_HOST=your-test-tidb-host
TEST_DB_PORT=4000
TEST_DB_USER=root
TEST_DB_PASSWORD=your-password
# What to sync
DATABASE_NAME=your_database
TABLES="table1,table2,table3"
```
### Task Template (`configs/task.yaml`)
This is just a **template for reference**. The actual task config is generated by [`scripts/init-dm.sh`](scripts/init-dm.sh).
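To make that generation step concrete, here is a rough sketch (not the actual `init-dm.sh` logic; names and layout are illustrative) of how a comma-separated `TABLES` value could be expanded into per-table entries:

```bash
# Sketch: expand TABLES into per-table do-tables entries.
# DATABASE_NAME and TABLES are example values, as if read from .env.
DATABASE_NAME="your_database"
TABLES="table1,table2,table3"

entries=""
count=0
for table in ${TABLES//,/ }; do      # replace commas with spaces, then split
  entries="${entries}  - db-name: \"${DATABASE_NAME}\"
    tbl-name: \"${table}\"
"
  count=$((count + 1))
done

printf '%s' "$entries"
```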
### Source Config (`configs/source.yaml`)
Template for source database connection. Also generated dynamically.
## Common Scenarios
### Adding/Removing Tables
1. Edit `.env` and update `TABLES` variable:
```bash
TABLES="table1,table2,table3,new_table4"
```
2. Re-initialize:
```bash
./sync-control.sh reinit
```
### Changing Source Database
1. Edit `.env` with new credentials
2. Restart everything:
```bash
docker compose down
./start.sh
```
### Resetting Sync (Fresh Start)
```bash
# Stop and remove everything
docker compose down -v
# Start fresh
./start.sh
```
### Pausing Sync Temporarily
```bash
# Pause (without stopping containers)
./sync-control.sh pause
# Resume when ready
./sync-control.sh resume
```
## Monitoring
### View Logs
```bash
# All services
docker compose logs -f
# DM Worker only
docker compose logs -f dm-worker
# DM Master only
docker compose logs -f dm-master
# Init script logs
docker logs dm-init
```
### Check DM Master Status
```bash
docker exec dm-master /dmctl --master-addr=dm-master:8261 operate-source show
```
### Check DM Worker Status
```bash
docker ps | grep dm-worker
```
## Troubleshooting
### Sync Not Starting
**Check init logs:**
```bash
docker logs dm-init
```
**Common issues:**
- Wrong credentials in `.env`
- Test database not accessible
- Tables don't exist in source
**Solution:**
```bash
# Fix .env, then:
./sync-control.sh reinit
```
### Sync Stopped with Errors
**Check error message:**
```bash
./sync-control.sh status
```
**Common errors:**
- Network connectivity issues
- Permission denied on source
- Table schema mismatch
**Solution:**
```bash
# Fix the underlying issue, then:
./sync-control.sh restart
```
### Data Not Syncing
**Verify task is running:**
```bash
./sync-control.sh status
```
**Check if tables exist:**
```bash
# On source
mysql -h $TEST_DB_HOST -P ${TEST_DB_PORT:-4000} -u $TEST_DB_USER -p -e "SHOW TABLES FROM your_database;"
# On local
mysql -h 127.0.0.1 -P 4000 -u root -e "SHOW TABLES FROM your_database;"
```
**Compare row counts:**
```bash
# Create a verification script
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT COUNT(*) FROM your_database.table1;"
```
## Performance Tuning
### Adjust Sync Speed
Edit `docker-compose.yml`:
```yaml
dm-worker:
  deploy:
    resources:
      limits:
        cpus: '2'   # Increase CPU
        memory: 2G  # Increase memory
```
Then restart:
```bash
docker compose down
docker compose up -d
```
### Monitor Resource Usage
```bash
docker stats
```
## Best Practices
1. **Always check status** after starting: `./status.sh`
2. **Monitor logs** during initial sync: `docker compose logs -f dm-worker`
3. **Verify data** in local TiDB after sync completes
4. **Use pause/resume** instead of stop/start for temporary halts
5. **Keep `.env` secure** - it contains credentials
6. **Test connectivity** before sync: `./test-connection.sh`
## FAQ
**Q: Is the sync real-time?**
A: Near real-time. Changes are replicated with minimal delay (usually seconds).
**Q: What happens if my laptop sleeps?**
A: Sync will resume automatically when containers restart.
**Q: Can I sync from multiple sources?**
A: Yes, but requires manual DM configuration. This setup is for single source.
**Q: Does it sync schema changes?**
A: Yes, DDL statements are replicated (CREATE, ALTER, DROP).
**Q: Can I sync to a different database name locally?**
A: Requires custom task configuration. Default syncs to same database name.
**Q: How do I exclude certain tables?**
A: Remove them from `TABLES` in `.env` and run `./sync-control.sh reinit`.
## See Also
- [README.md](README.md) - Main documentation
- [DATAGRIP_SETUP.md](DATAGRIP_SETUP.md) - Connect with GUI clients
- [scripts/init-dm.sh](scripts/init-dm.sh) - Initialization script
- [TiDB DM Documentation](https://docs.pingcap.com/tidb-data-migration/stable)


@@ -1,275 +0,0 @@
# TiDB Cloud to Local TiDB Migration Guide
This guide provides officially recommended approaches for migrating data from TiDB Cloud to a local TiDB instance, since TiDB Data Migration (DM) cannot be used with TiDB Cloud Serverless.
## Why DM Doesn't Work with TiDB Cloud
TiDB Data Migration (DM) fails with TiDB Cloud because:
1. **No MySQL binlog support** - TiDB Cloud Serverless doesn't expose binlog in the traditional MySQL way
2. **binlog_format is STATEMENT** - DM requires ROW format
3. **TiDB explicitly not supported as upstream** - DM is designed for MySQL/MariaDB → TiDB, not TiDB → TiDB
## Approach 1: Console Export + SQL Import (Simplest)
### Export from TiDB Cloud
1. **Using TiDB Cloud Console**:
- Navigate to your cluster in the TiDB Cloud Console
- Go to Data > Import
- Click "Export Data to" > "Local File"
- Select databases/tables to export
- Choose format (SQL recommended for small datasets)
- Click "Export"
2. **Using TiDB Cloud CLI**:
```bash
# Create export task
ticloud serverless export create -c <cluster-id>
# Download exported data
ticloud serverless export download -c <cluster-id> -e <export-id>
```
### Import to Local TiDB
```bash
# Import SQL file
mysql -h 127.0.0.1 -P 4000 -u root < exported_data.sql
# Or for CSV files
mysql -h 127.0.0.1 -P 4000 -u root -e "
LOAD DATA LOCAL INFILE 'table_data.csv'
INTO TABLE your_table
FIELDS TERMINATED BY ','
ENCLOSED BY '\"'
LINES TERMINATED BY '\n'
IGNORE 1 ROWS;"
```
## Approach 2: Dumpling + TiDB Lightning (For Larger Datasets)
### Prerequisites
Install TiDB tools:
```bash
# Install TiUP
curl --proto '=https' --tlsv1.2 -sSf https://tiup-mirrors.pingcap.com/install.sh | sh
source ~/.bash_profile
# Install tools
tiup install dumpling tidb-lightning
```
### Export with Dumpling
```bash
# Export data from TiDB Cloud
dumpling \
-u {TEST_DB_USER} \
-p {TEST_DB_PASSWORD} \
-P {TEST_DB_PORT} \
-h {TEST_DB_HOST} \
-o /tmp/tidb-export \
--filetype sql \
-r 20000 \
-F 256MiB
```
### Import with TiDB Lightning
1. **Create configuration file** (`lightning.toml`):
```toml
[lightning]
level = "info"
file = "tidb-lightning.log"
[tikv-importer]
backend = "local"
sorted-kv-dir = "/tmp/sorted-kv-dir"
[mydumper]
data-source-dir = "/tmp/tidb-export"
no-schema = false
[tidb]
host = "127.0.0.1"
port = 4000
user = "root"
password = ""
status-port = 10080
pd-addr = "127.0.0.1:2379"
```
2. **Run TiDB Lightning**:
```bash
tidb-lightning -config lightning.toml
```
## Approach 3: Periodic Sync Script
Create a script for periodic data sync:
### Export Script (`export-cloud.sh`)
```bash
#!/bin/bash
# Source .env file
source .env
# Export data using mysqldump (built-in tool)
mysqldump \
-h $TEST_DB_HOST \
-P $TEST_DB_PORT \
-u $TEST_DB_USER \
-p$TEST_DB_PASSWORD \
--single-transaction \
--routines \
--triggers \
$DATABASE_NAME \
$TABLES > /tmp/cloud-export.sql
echo "Export completed: /tmp/cloud-export.sql"
```
### Import Script (`import-local.sh`)
```bash
#!/bin/bash
# Import to local TiDB
mysql -h 127.0.0.1 -P 4000 -u root < /tmp/cloud-export.sql
echo "Import completed to local TiDB"
```
### Combined Sync Script (`sync-data.sh`)
```bash
#!/bin/bash
echo "🔄 Syncing data from TiDB Cloud to local TiDB..."
# Export from cloud
./export-cloud.sh
# Import to local
./import-local.sh
echo "✅ Sync completed!"
```
## Approach 4: Application-Level Sync (For Continuous Updates)
For real-time sync, implement in your application:
```python
# Example Python script for selective sync
import mysql.connector

# Connect to both databases (credentials below are placeholders -
# never commit real credentials to documentation)
cloud_db = mysql.connector.connect(
    host="<your-tidb-cloud-host>",
    port=4000,
    user="<your-user>",
    password="<your-password>",
    database="workflow_local"
)
local_db = mysql.connector.connect(
    host="127.0.0.1",
    port=4000,
    user="root",
    password="",
    database="workflow_local"
)

# Sync specific tables
def sync_table(table_name):
    # Get data from cloud
    cloud_cursor = cloud_db.cursor()
    cloud_cursor.execute(f"SELECT * FROM {table_name}")
    rows = cloud_cursor.fetchall()

    # Clear and insert into local
    local_cursor = local_db.cursor()
    local_cursor.execute(f"DELETE FROM {table_name}")
    if rows:
        placeholders = ','.join(['%s'] * len(rows[0]))
        local_cursor.executemany(
            f"INSERT INTO {table_name} VALUES ({placeholders})",
            rows
        )
    local_db.commit()
    print(f"Synced {len(rows)} rows to {table_name}")

# Sync your tables
sync_table("plans")
```
## Recommended Solution for Your Setup
For development purposes, I recommend:
1. **Use Approach 1** (Console Export + SQL Import) for simplicity
2. **Create helper scripts** for periodic sync
3. **Consider application-level sync** for real-time needs
### Quick Setup
Create these helper scripts in your project:
```bash
# Make scripts executable
chmod +x sync-data.sh export-cloud.sh import-local.sh
# Run sync
./sync-data.sh
```
## Limitations and Considerations
### TiDB Cloud Serverless Limitations
- No traditional MySQL binlog access
- Limited to export/import methods
- No direct replication support in most plans
### Performance Considerations
- Full table exports can be slow for large datasets
- Network bandwidth affects sync speed
- Consider incremental exports for large tables
### Security Notes
- Store credentials securely (use .env file)
- Use TLS connections when possible
- Rotate credentials regularly
## Troubleshooting
### Connection Issues
```bash
# Test connection to TiDB Cloud
mysql -h $TEST_DB_HOST -P $TEST_DB_PORT -u $TEST_DB_USER -p
# Test connection to local TiDB
mysql -h 127.0.0.1 -P 4000 -u root
```
### Export Errors
- Ensure user has SELECT privileges
- Check network connectivity
- Verify table existence
### Import Errors
- Check schema compatibility
- Ensure sufficient disk space
- Verify TiDB is running
## References
- [TiDB Cloud Export Documentation](https://docs.pingcap.com/tidbcloud/serverless-export/)
- [TiDB Migration Tools Overview](https://docs.pingcap.com/tidb/stable/migration-tools)
- [Dumpling Documentation](https://docs.pingcap.com/tidb/stable/dumpling-overview)
- [TiDB Lightning Documentation](https://docs.pingcap.com/tidb/stable/tidb-lightning-overview)
For production use cases, contact TiDB Cloud Support to discuss available replication options for your specific plan.


@@ -1,419 +0,0 @@
# Troubleshooting Guide
## Common Issues and Solutions
### 1. TiDB Health Check Failing
#### Symptom
```
dependency failed to start: container tidb is unhealthy
```
Even though you can connect to TiDB from your host machine, Docker health check fails.
#### Root Cause
The original health check tried to use `mysql` command inside the TiDB container:
```yaml
healthcheck:
  test: ["CMD", "mysql", "-h", "127.0.0.1", "-P", "4000", "-u", "root", "-e", "SELECT 1"]
```
The TiDB Docker image doesn't include the MySQL client binary, so this check always failed.
#### Solution ✅
Use TiDB's built-in HTTP status endpoint instead:
```yaml
healthcheck:
  test: ["CMD", "wget", "-q", "-O-", "http://127.0.0.1:10080/status"]
  interval: 10s
  timeout: 5s
  retries: 3
  start_period: 10s
```
**Why this works:**
- TiDB exposes a status endpoint on port 10080
- `wget` is available in the container
- Returns HTTP 200 when TiDB is ready
- `start_period` gives TiDB time to initialize before health checks begin
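The same wait-until-ready idea can be written as a small retry helper (an illustrative sketch, not one of the project's scripts); the last line substitutes `true` for the real `wget` probe so the example is self-contained:

```bash
# Sketch: retry a command until it succeeds or attempts run out,
# mirroring what the Docker health check does with retries/start_period.
retry() {
  attempts=$1
  shift
  n=0
  while [ "$n" -lt "$attempts" ]; do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    n=$((n + 1))
    sleep 1
  done
  return 1
}

# Real usage would look like:
#   retry 30 wget -q -O- http://127.0.0.1:10080/status
retry 3 true && status=up || status=down
echo "tidb status: $status"
```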
### 2. Docker Compose Version Warning
#### Symptom
```
WARN[0000] version is obsolete, it will be ignored
```
#### Solution ✅
Remove the `version` field from `docker-compose.yml`. Modern Docker Compose (v2) doesn't need it.
**Before:**
```yaml
version: '3.8'
services:
...
```
**After:**
```yaml
services:
...
```
### 3. Service Dependencies Not Starting in Order
#### Symptom
Services fail because dependencies aren't ready yet.
#### Solution ✅
Use proper health checks and dependency conditions:
```yaml
dm-worker:
  depends_on:
    tidb:
      condition: service_healthy
    dm-master:
      condition: service_healthy
```
**Important:**
- Each dependency must have a working health check
- `start_period` prevents false negatives during startup
### 4. dm-init Fails to Start
#### Symptom
```
Error: dm-init exits immediately
```
#### Check:
```bash
docker logs dm-init
```
#### Common Causes:
**a) .env not configured:**
```bash
# Check if .env exists and has real values
cat .env
```
**Solution:**
```bash
# Copy template and edit
cp .env.example .env
vim .env
```
**b) Test database not reachable:**
```bash
# Test from dm-init container
docker run --rm --network tidb-network pingcap/dm:latest \
sh -c "wget -q -O- http://tidb:10080/status"
```
**c) Script syntax error:**
```bash
# Check init script
sh -n scripts/init-dm.sh
```
### 5. Containers Keep Restarting
#### Check Status:
```bash
docker ps -a
docker logs <container_name>
```
#### Common Issues:
**a) Port already in use:**
```
Error: bind: address already in use
```
**Solution:** Change ports in `docker-compose.yml`:
```yaml
ports:
  - "14000:4000"  # Changed from 4000:4000
```
**b) Out of memory:**
```
Error: OOM killed
```
**Solution:** Increase memory limits or free up system resources.
**c) Permission issues:**
```
Error: permission denied
```
**Solution:** Check volume permissions or run:
```bash
docker compose down -v # Remove volumes
docker compose up -d # Recreate
```
### 6. Sync Task Not Running
#### Check Status:
```bash
./status.sh
# or
./sync-control.sh status
```
#### Common Issues:
**a) Task not created:**
```bash
# Check if source is configured
docker exec dm-master /dmctl --master-addr=dm-master:8261 operate-source show
```
**Solution:**
```bash
./sync-control.sh reinit
```
**b) Wrong credentials:**
Check logs:
```bash
docker logs dm-worker
```
Fix `.env` and reinit:
```bash
vim .env
./sync-control.sh reinit
```
**c) Table doesn't exist:**
Verify tables exist on source database:
```bash
# Connect to test DB and check
SHOW TABLES FROM your_database;
```
### 7. Connection Refused to TiDB
#### Symptom
```
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (61)
```
#### Checks:
**a) Is TiDB running?**
```bash
docker ps | grep tidb
```
**b) Is it healthy?**
```bash
docker ps
# Look for "(healthy)" status
```
**c) Is port exposed?**
```bash
docker port tidb
# Should show: 4000/tcp -> 0.0.0.0:4000
```
**d) Test from inside container:**
```bash
docker exec tidb wget -q -O- http://127.0.0.1:10080/status
```
#### Solutions:
**If container not running:**
```bash
docker compose up -d tidb
docker logs tidb
```
**If unhealthy:**
```bash
# Wait for health check
sleep 15
docker ps
# If still unhealthy, check logs
docker logs tidb
```
**If port not exposed:**
```bash
# Recreate container
docker compose down
docker compose up -d
```
### 8. Data Not Syncing
#### Verify sync is running:
```bash
./sync-control.sh status
```
#### Check sync lag:
Look for "syncer" section in status output.
#### Common Issues:
**a) Sync paused:**
```bash
./sync-control.sh resume
```
**b) Sync stopped with error:**
```bash
# Check error in status output
./sync-control.sh status
# Fix the issue, then restart
./sync-control.sh restart
```
**c) Network issues:**
```bash
# Test connectivity from dm-worker to source
docker exec dm-worker ping -c 3 your-test-db-host
```
**d) Binlog not enabled on source:**
Source database must have binlog enabled for incremental sync.
### 9. Slow Sync Performance
#### Check resource usage:
```bash
docker stats
```
#### Solutions:
**a) Increase worker resources:**
Edit `docker-compose.yml`:
```yaml
dm-worker:
  deploy:
    resources:
      limits:
        cpus: '2'
        memory: 2G
```
**b) Optimize batch size:**
See [TiDB DM Documentation](https://docs.pingcap.com/tidb-data-migration/stable/tune-configuration) for advanced tuning.
### 10. Docker Compose v1 vs v2 Issues
#### Symptom
```
docker: 'compose' is not a docker command
```
#### Solution
See [DOCKER_COMPOSE_V2.md](DOCKER_COMPOSE_V2.md) for:
- Upgrading to v2
- Creating an alias
- Compatibility mode
## Diagnostic Commands
### Check everything at once:
```bash
# Service status
docker compose ps
# Health checks
docker ps
# Logs (all services)
docker compose logs --tail=50
# Logs (specific service)
docker compose logs --tail=50 tidb
# Resource usage
docker stats --no-stream
# Network connectivity
docker network inspect tidb-network
```
### Test connectivity:
```bash
# From host to TiDB
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT 1"
# From host (HTTP)
curl http://127.0.0.1:10080/status
# From container to TiDB
docker run --rm --network tidb-network pingcap/dm:latest \
wget -q -O- http://tidb:10080/status
```
### Reset everything:
```bash
# Stop and remove everything (including data)
docker compose down -v
# Start fresh
./start.sh
```
## Getting Help
### Collect diagnostic information:
```bash
# Create a diagnostic report
echo "=== Docker Version ===" > diagnostic.txt
docker --version >> diagnostic.txt
docker compose version >> diagnostic.txt
echo -e "\n=== Container Status ===" >> diagnostic.txt
docker ps -a >> diagnostic.txt
echo -e "\n=== TiDB Logs ===" >> diagnostic.txt
docker logs tidb --tail=50 >> diagnostic.txt 2>&1
echo -e "\n=== DM Worker Logs ===" >> diagnostic.txt
docker logs dm-worker --tail=50 >> diagnostic.txt 2>&1
echo -e "\n=== DM Init Logs ===" >> diagnostic.txt
docker logs dm-init >> diagnostic.txt 2>&1
echo -e "\n=== Network Info ===" >> diagnostic.txt
docker network inspect tidb-network >> diagnostic.txt
echo "Report saved to diagnostic.txt"
```
### Useful resources:
- [TiDB Documentation](https://docs.pingcap.com/tidb/stable)
- [TiDB DM Documentation](https://docs.pingcap.com/tidb-data-migration/stable)
- Project documentation:
- [README.md](README.md)
- [SYNC_GUIDE.md](SYNC_GUIDE.md)
- [DATAGRIP_SETUP.md](DATAGRIP_SETUP.md)
- [DOCKER_COMPOSE_V2.md](DOCKER_COMPOSE_V2.md)
## Still Having Issues?
If none of these solutions work:
1. Check logs: `docker compose logs`
2. Create diagnostic report (see above)
3. Check if it's a known issue in TiDB/DM GitHub issues
4. Verify your environment meets prerequisites (see [README.md](README.md))


@@ -1,12 +0,0 @@
source-id: "test-tidb"
enable-gtid: false
enable-relay: false
from:
  host: "${TEST_DB_HOST}"
  port: ${TEST_DB_PORT}
  user: "${TEST_DB_USER}"
  password: "${TEST_DB_PASSWORD}"
  security:
    ssl-ca: ""
    ssl-cert: ""
    ssl-key: ""


@@ -1,52 +0,0 @@
# ==============================================================================
# DM Task Configuration Template
# ==============================================================================
#
# This file is a TEMPLATE and is NOT used directly by DM.
#
# The actual task configuration is dynamically generated by:
# scripts/init-dm.sh
#
# The script reads environment variables from .env and creates the real task.yaml
# with your specific database name and table list.
#
# HOW TO RUN THE SYNC:
# --------------------
# 1. Configure .env file with your credentials
# 2. Run: ./start.sh (auto-generates and starts sync)
# 3. Check status: ./status.sh or ./sync-control.sh status
#
# For detailed guide, see: SYNC_GUIDE.md
#
# MANUAL CONTROL:
# ---------------
# - Start: ./sync-control.sh start
# - Stop: ./sync-control.sh stop
# - Pause: ./sync-control.sh pause
# - Resume: ./sync-control.sh resume
# - Restart: ./sync-control.sh restart
# - Reinit: ./sync-control.sh reinit
#
# ==============================================================================
# Template structure (for reference):
#
# name: "test-to-local"
# task-mode: "all"  # full + incremental sync
#
# target-database:
#   host: "tidb"
#   port: 4000
#   user: "root"
#   password: ""
#
# mysql-instances:
#   - source-id: "test-tidb"
#     block-allow-list: "sync-tables"
#
# block-allow-list:
#   sync-tables:
#     do-dbs: ["${DATABASE_NAME}"]
#     do-tables:
#       - db-name: "${DATABASE_NAME}"
#         tbl-name: "table1"
#       - db-name: "${DATABASE_NAME}"
#         tbl-name: "table2"


@@ -1,15 +1,6 @@
# OrbStack optimized settings
x-common-settings: &common
  restart: unless-stopped
  logging:
    driver: "json-file"
    options:
      max-size: "10m"
      max-file: "3"

# Minimal TiDB setup - Standalone mode with unistore
services:
  tidb:
    <<: *common
    image: pingcap/tidb:latest
    container_name: tidb
    hostname: tidb.orb.local  # OrbStack DNS
@@ -39,89 +30,11 @@ services:
        reservations:
          cpus: '1'
          memory: 1G

  dm-master:
    <<: *common
    image: pingcap/dm:latest
    container_name: dm-master
    hostname: dm-master.orb.local
    command:
      - /dm-master
      - --master-addr=:8261
      - --advertise-addr=dm-master:8261
    ports:
      - "8261:8261"
    volumes:
      - dm_master_data:/data
      - ./configs:/configs:ro
    healthcheck:
      test: ["CMD", "wget", "-q", "-O-", "http://127.0.0.1:8261/status"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    deploy:
      resources:
        limits:
          cpus: '0.5'
          memory: 512M

  dm-worker:
    <<: *common
    image: pingcap/dm:latest
    container_name: dm-worker
    hostname: dm-worker.orb.local
    command:
      - /dm-worker
      - --worker-addr=:8262
      - --advertise-addr=dm-worker:8262
      - --join=dm-master:8261
    volumes:
      - dm_worker_data:/data
      - ./configs:/configs:ro
    depends_on:
      tidb:
        condition: service_healthy
      dm-master:
        condition: service_healthy
    healthcheck:
      test: ["CMD", "wget", "-q", "-O-", "http://127.0.0.1:8262/status"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 5s
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G

  dm-init:
    image: pingcap/dm:latest
    container_name: dm-init
    volumes:
      - ./configs:/configs:ro
      - ./scripts:/scripts:ro
    environment:
      - TEST_DB_HOST=${TEST_DB_HOST}
      - TEST_DB_PORT=${TEST_DB_PORT}
      - TEST_DB_USER=${TEST_DB_USER}
      - TEST_DB_PASSWORD=${TEST_DB_PASSWORD}
      - DATABASE_NAME=${DATABASE_NAME}
      - TABLES=${TABLES}
    depends_on:
      dm-worker:
        condition: service_healthy
    command: ["/bin/sh", "/scripts/init-dm.sh"]
    restart: "no"
    restart: unless-stopped

volumes:
  tidb_data:
    driver: local
  dm_master_data:
    driver: local
  dm_worker_data:
    driver: local

networks:
  default:


@@ -1,86 +0,0 @@
#!/bin/bash

echo "☁️ Exporting data from TiDB Cloud..."

# Check if .env exists
if [ ! -f .env ]; then
    echo "❌ .env file not found!"
    echo "📝 Please create .env file with your TiDB Cloud credentials"
    exit 1
fi

# Source environment variables
source .env

# Validate required variables
if [ -z "$TEST_DB_HOST" ] || [ -z "$TEST_DB_USER" ] || [ -z "$TEST_DB_PASSWORD" ]; then
    echo "❌ Missing database credentials in .env"
    echo "📝 Required: TEST_DB_HOST, TEST_DB_USER, TEST_DB_PASSWORD"
    exit 1
fi

# Create export directory
EXPORT_DIR="/tmp/tidb-cloud-export"
mkdir -p "$EXPORT_DIR"

# Test connection
echo "🔍 Testing connection to TiDB Cloud..."
mysql -h "$TEST_DB_HOST" -P "${TEST_DB_PORT:-4000}" -u "$TEST_DB_USER" -p"$TEST_DB_PASSWORD" -e "SELECT 1" >/dev/null 2>&1
if [ $? -ne 0 ]; then
    echo "❌ Cannot connect to TiDB Cloud"
    exit 1
fi
echo "✅ Connected successfully"

# Export schema using SQL queries
echo "📦 Exporting schema..."

# Create database statement
echo "CREATE DATABASE IF NOT EXISTS \`${DATABASE_NAME:-workflow_local}\`;
USE \`${DATABASE_NAME:-workflow_local}\`;
" > "$EXPORT_DIR/schema.sql"

# Get table schemas
for table in ${TABLES//,/ }; do
    echo "-- Table: $table"
    mysql -h "$TEST_DB_HOST" -P "${TEST_DB_PORT:-4000}" -u "$TEST_DB_USER" -p"$TEST_DB_PASSWORD" -e "SHOW CREATE TABLE \`${DATABASE_NAME:-workflow_local}\`.$table;" -N -s | cut -f2 >> "$EXPORT_DIR/schema.sql"
    echo ";" >> "$EXPORT_DIR/schema.sql"
    echo "" >> "$EXPORT_DIR/schema.sql"
done

# Check if export was successful
if [ ! -s "$EXPORT_DIR/schema.sql" ]; then
    echo "❌ Schema export failed - empty file"
    exit 1
fi
echo "✅ Schema exported to $EXPORT_DIR/schema.sql"

# Export data using SQL
echo "📦 Exporting data..."

# Clear data file
> "$EXPORT_DIR/data.sql"

# Export data for each table
for table in ${TABLES//,/ }; do
    echo "-- Data for table: $table"
    # Simple approach: export as tab-separated values and convert to INSERT statements
    mysql -h "$TEST_DB_HOST" -P "${TEST_DB_PORT:-4000}" -u "$TEST_DB_USER" -p"$TEST_DB_PASSWORD" -e "SELECT * FROM \`${DATABASE_NAME:-workflow_local}\`.$table;" | sed '1d' > "$EXPORT_DIR/${table}.csv"
    # If we have data, convert to INSERT statements
    if [ -s "$EXPORT_DIR/${table}.csv" ]; then
        # This is a simplified approach - for production use, you'd want a more robust CSV to SQL converter
echo "-- Note: Data export for $table requires manual conversion from CSV" >> $EXPORT_DIR/data.sql
echo "-- CSV file location: $EXPORT_DIR/${table}.csv" >> $EXPORT_DIR/data.sql
fi
done
echo "⚠️ Data export completed - CSV files created for manual import"
echo "📂 Export completed successfully!"
echo " Schema: $EXPORT_DIR/schema.sql"
echo " Data CSV files:"
for table in ${TABLES//,/ }; do
echo " $EXPORT_DIR/${table}.csv"
done


@@ -1,91 +0,0 @@
#!/bin/bash
echo "🏠 Importing data to local TiDB..."
# Check if local TiDB is accessible
mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT 1" >/dev/null 2>&1
if [ $? -ne 0 ]; then
echo "❌ Cannot connect to local TiDB"
echo "📝 Make sure TiDB is running: ./start.sh"
exit 1
fi
# Check if export files exist
EXPORT_DIR="/tmp/tidb-cloud-export"
if [ ! -f "$EXPORT_DIR/schema.sql" ]; then
echo "❌ Export files not found!"
echo "📝 Run export-cloud.sh first"
exit 1
fi
# Import schema
echo "🏗️ Importing schema..."
mysql -h 127.0.0.1 -P 4000 -u root < $EXPORT_DIR/schema.sql 2>/dev/null
if [ $? -ne 0 ]; then
echo "❌ Schema import failed"
exit 1
fi
echo "✅ Schema imported successfully"
# Import data from CSV files
echo "📥 Importing data..."
for table in ${TABLES//,/ }; do
if [ -f "$EXPORT_DIR/${table}.csv" ]; then
echo " Importing data for table: $table"
# Count lines in CSV (excluding header if present)
line_count=$(wc -l < "$EXPORT_DIR/${table}.csv" | tr -d ' ')
if [ "$line_count" -gt 0 ]; then
# Use LOAD DATA LOCAL INFILE to import CSV
mysql -h 127.0.0.1 -P 4000 -u root --local-infile=1 -e "
USE ${DATABASE_NAME:-workflow_local};
LOAD DATA LOCAL INFILE '$EXPORT_DIR/${table}.csv'
INTO TABLE $table
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 0 LINES;
" 2>/dev/null
if [ $? -ne 0 ]; then
echo "⚠️ Warning: Failed to import data for table $table"
# Try alternative method - read CSV and generate INSERT statements
echo " Trying alternative import method..."
while IFS=$'\t' read -r col1 col2 col3 col4 col5; do
# Escape single quotes
col1_escaped=$(echo "$col1" | sed "s/'/''/g")
col2_escaped=$(echo "$col2" | sed "s/'/''/g")
col3_escaped=$(echo "$col3" | sed "s/'/''/g")
col4_escaped=$(echo "$col4" | sed "s/'/''/g")
col5_escaped=$(echo "$col5" | sed "s/'/''/g" | sed "s/NULL//")
# Handle NULL values
if [ "$col5_escaped" = "" ]; then
col5_sql="NULL"
else
col5_sql="'$col5_escaped'"
fi
# Only insert if we have data
if [ -n "$col1" ]; then
mysql -h 127.0.0.1 -P 4000 -u root -e "
USE ${DATABASE_NAME:-workflow_local};
INSERT INTO $table (id, name, description, type, parent_plan_id)
VALUES ('$col1_escaped', '$col2_escaped', '$col3_escaped', '$col4_escaped', $col5_sql);
" 2>/dev/null
fi
done < "$EXPORT_DIR/${table}.csv"
fi
# Count rows imported
row_count=$(mysql -h 127.0.0.1 -P 4000 -u root -e "USE ${DATABASE_NAME:-workflow_local}; SELECT COUNT(*) FROM $table;" -N -s 2>/dev/null)
echo " ✅ Imported $row_count rows into $table"
else
echo " No data to import for table $table"
fi
fi
done
echo "✅ Data import completed"
echo "🎉 Import completed!"


@@ -1,75 +0,0 @@
#!/bin/sh
set -e
echo "Waiting for DM master to be ready..."
sleep 5
# Check if it's TiDB Cloud (requires TLS)
if echo "$TEST_DB_HOST" | grep -q "tidbcloud.com"; then
echo "Detected TiDB Cloud - downloading CA certificate for TLS..."
wget -q -O /tmp/isrgrootx1.pem https://letsencrypt.org/certs/isrgrootx1.pem
# Generate source.yaml with TLS for TiDB Cloud
cat > /tmp/source.yaml <<EOF
source-id: "test-tidb"
enable-gtid: false
enable-relay: false
server-id: 101
from:
host: "$TEST_DB_HOST"
port: $TEST_DB_PORT
user: "$TEST_DB_USER"
password: "$TEST_DB_PASSWORD"
security:
ssl-ca: "/tmp/isrgrootx1.pem"
EOF
else
# Generate source.yaml without TLS for regular TiDB
cat > /tmp/source.yaml <<EOF
source-id: "test-tidb"
enable-gtid: false
enable-relay: false
from:
host: "$TEST_DB_HOST"
port: $TEST_DB_PORT
user: "$TEST_DB_USER"
password: "$TEST_DB_PASSWORD"
EOF
fi
echo "Creating DM source configuration..."
/dmctl --master-addr=dm-master:8261 operate-source create /tmp/source.yaml || true
# Generate task.yaml with multiple tables
echo "name: \"test-to-local\"
task-mode: \"all\"
target-database:
host: \"tidb\"
port: 4000
user: \"root\"
password: \"\"
mysql-instances:
- source-id: \"test-tidb\"
block-allow-list: \"sync-tables\"
block-allow-list:
sync-tables:
do-dbs: [\"$DATABASE_NAME\"]
do-tables:" > /tmp/task.yaml
# Add each table from TABLES variable
IFS=',' read -ra TABLE_ARRAY <<< "$TABLES"
for table in "${TABLE_ARRAY[@]}"; do
table=$(echo "$table" | xargs) # trim whitespace
echo " - db-name: \"$DATABASE_NAME\"
tbl-name: \"$table\"" >> /tmp/task.yaml
done
echo "Starting DM sync task..."
/dmctl --master-addr=dm-master:8261 start-task /tmp/task.yaml || true
echo "Checking task status..."
/dmctl --master-addr=dm-master:8261 query-status test-to-local
echo "DM initialization complete!"


@@ -1,26 +1,7 @@
#!/bin/bash
set -e
echo "🚀 Starting TiDB Local Environment..."
# Check if .env exists
if [ ! -f .env ]; then
echo "⚠️ .env file not found!"
echo "📝 Creating .env from template..."
cp .env.example .env
echo "✏️ Please edit .env file with your test database credentials"
echo " Then run this script again."
exit 1
fi
# Source .env to check if configured
source .env
if [ "$TEST_DB_HOST" = "your-test-tidb-host" ]; then
echo "⚠️ .env file needs configuration!"
echo "✏️ Please edit .env file with your test database credentials"
exit 1
fi
echo "🚀 Starting Minimal TiDB Environment..."
echo "🐳 Starting Docker containers..."
docker compose up -d
@@ -36,15 +17,10 @@ echo "📊 Connection Info:"
echo " TiDB: mysql -h 127.0.0.1 -P 4000 -u root"
echo " DataGrip: Host: 127.0.0.1, Port: 4000, User: root, Password: (empty)"
echo ""
echo "🔄 To sync data from TiDB Cloud:"
echo " ./sync-data.sh"
echo ""
echo "🔍 Useful commands:"
echo " Test connection: ./test-connection.sh"
echo " Sync data: ./sync-data.sh"
echo " View logs: docker compose logs -f"
echo " Stop environment: docker compose down"
echo ""
echo "📖 For DataGrip setup: see DATAGRIP_SETUP.md"
echo "📘 For TiDB Cloud migration: see TIDB_CLOUD_MIGRATION.md"
echo ""


@@ -1,22 +0,0 @@
#!/bin/bash
echo "🔍 Checking DM Sync Status..."
echo ""
# Check if containers are running
if ! docker ps | grep -q dm-master; then
echo "❌ DM Master is not running. Start the environment first:"
echo " ./start.sh"
exit 1
fi
echo "📡 Source Configuration:"
docker exec dm-master /dmctl --master-addr=dm-master:8261 operate-source show
echo ""
echo "📊 Task Status:"
docker exec dm-master /dmctl --master-addr=dm-master:8261 query-status test-to-local
echo ""
echo "💾 Local TiDB Databases:"
docker exec tidb mysql -h 127.0.0.1 -P 4000 -u root -e "SHOW DATABASES;"


@@ -1,81 +0,0 @@
#!/bin/bash
echo "⚠️ WARNING: TiDB Data Migration (DM) is not compatible with TiDB Cloud Serverless"
echo "⚠️ This script is deprecated. Use ./sync-data.sh instead."
echo ""
echo "For officially supported migration approaches, see TIDB_CLOUD_MIGRATION.md"
echo ""
TASK_NAME="test-to-local"
DMCTL="docker exec dm-master /dmctl --master-addr=dm-master:8261"
show_usage() {
echo "Usage: ./sync-control.sh [command]"
echo ""
echo "Commands:"
echo " status - Show sync task status"
echo " start - Start the sync task"
echo " stop - Stop the sync task"
echo " pause - Pause the sync task"
echo " resume - Resume the sync task"
echo " restart - Restart the sync task (stop + start)"
echo " reinit - Re-initialize DM configuration"
echo ""
}
check_dm() {
if ! docker ps | grep -q dm-master; then
echo "❌ DM Master is not running"
echo " Start with: ./start.sh"
exit 1
fi
}
case "$1" in
status)
check_dm
echo "📊 Checking sync status for task: $TASK_NAME"
echo ""
$DMCTL query-status $TASK_NAME
;;
start)
check_dm
echo "▶️ Starting sync task: $TASK_NAME"
$DMCTL start-task $TASK_NAME
;;
stop)
check_dm
echo "⏹️ Stopping sync task: $TASK_NAME"
$DMCTL stop-task $TASK_NAME
;;
pause)
check_dm
echo "⏸️ Pausing sync task: $TASK_NAME"
$DMCTL pause-task $TASK_NAME
;;
resume)
check_dm
echo "▶️ Resuming sync task: $TASK_NAME"
$DMCTL resume-task $TASK_NAME
;;
restart)
check_dm
echo "🔄 Restarting sync task: $TASK_NAME"
$DMCTL stop-task $TASK_NAME
sleep 2
$DMCTL start-task $TASK_NAME
;;
reinit)
echo "🔄 Re-initializing DM configuration..."
docker compose up -d dm-init
echo ""
echo "⏳ Waiting for initialization..."
sleep 5
docker logs dm-init
;;
*)
show_usage
exit 1
;;
esac


@@ -1,29 +0,0 @@
#!/bin/bash
echo "🔄 Syncing data from TiDB Cloud to local TiDB..."
echo ""
# Export from TiDB Cloud
echo "☁️ Step 1: Exporting from TiDB Cloud"
./export-cloud.sh
if [ $? -ne 0 ]; then
echo "❌ Export failed"
exit 1
fi
echo ""
# Import to local TiDB
echo "🏠 Step 2: Importing to local TiDB"
./import-local.sh
if [ $? -ne 0 ]; then
echo "❌ Import failed"
exit 1
fi
echo ""
echo "✅ Data sync completed successfully!"
echo ""
echo "📊 Verify data:"
echo " mysql -h 127.0.0.1 -P 4000 -u root -e 'USE ${DATABASE_NAME:-workflow_local}; SHOW TABLES;'"
echo " mysql -h 127.0.0.1 -P 4000 -u root -e 'USE ${DATABASE_NAME:-workflow_local}; SELECT COUNT(*) FROM ${TABLES%%,*};'"


@@ -1,6 +1,6 @@
#!/bin/bash
echo "🔌 Testing TiDB Connection for DataGrip..."
echo "🔌 Testing TiDB Connection..."
echo ""
# Check if TiDB container is running
@@ -22,7 +22,7 @@ if docker exec tidb mysql -h 127.0.0.1 -P 4000 -u root -e "SELECT VERSION();" 2>
echo "📊 Available databases:"
docker exec tidb mysql -h 127.0.0.1 -P 4000 -u root -e "SHOW DATABASES;" 2>/dev/null
echo ""
echo "🎯 DataGrip Connection Settings:"
echo "🎯 Connection Settings:"
echo " Host: 127.0.0.1"
echo " Port: 4000"
echo " User: root"