Scale means different things:
Data volume: Petabytes of storage. Partition aggressively. Use columnar formats. Compress everything.
Query concurrency: Many users querying simultaneously. Cache common queries. Pre-aggregate where possible.
Pipeline throughput: Events per second. Kafka handles millions. Size partitions appropriately.
Team scale: Many engineers contributing. Standardize patterns. Enforce conventions. Document everything.
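On pipeline throughput, "size partitions appropriately" comes down to simple arithmetic: divide target throughput (plus headroom) by the sustained rate one partition can absorb. A minimal sketch, where the per-partition rate and headroom factor are assumptions you should replace with measurements from your own cluster:

```python
import math

def partitions_needed(target_mb_s: float,
                      per_partition_mb_s: float = 10.0,
                      headroom: float = 2.0) -> int:
    """Estimate Kafka partition count for a target throughput.

    per_partition_mb_s is an assumed sustained write rate per partition;
    headroom leaves room for spikes and broker loss. Both are placeholders,
    not benchmarks.
    """
    return math.ceil(target_mb_s * headroom / per_partition_mb_s)

print(partitions_needed(100))  # 100 MB/s target, 2x headroom -> 20
```

Round up, never down: too few partitions caps consumer parallelism, and repartitioning a live topic is painful.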
Cloud warehouses handle volume and concurrency well; your job is partitioning, data modeling, and cost management. Don't optimize prematurely, but do design for x growth.
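"Partition aggressively" usually means Hive-style date partitioning, so queries that filter on the partition key skip every directory outside the range. A sketch of the path convention (the table name and `dt=` key are illustrative, not a fixed standard):

```python
from datetime import datetime, timezone
from pathlib import PurePosixPath

def partition_path(table: str, event_ts: datetime) -> PurePosixPath:
    # One directory per day; engines that understand Hive-style layouts
    # prune partitions whose dt= value falls outside the query's filter.
    return PurePosixPath(table) / f"dt={event_ts.date().isoformat()}"

p = partition_path("events", datetime(2024, 3, 1, 12, 0, tzinfo=timezone.utc))
print(p)  # events/dt=2024-03-01
```

The same layout is what makes cost management tractable: a scan over one day touches one directory, not the whole table.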