We formulate and characterize a fundamental tradeoff between "computation load" and "data shuffling load" in distributed computing, and demonstrate that the two are inversely proportional to each other. We then propose a new framework for distributed computing, named Coded Distributed Computing (CDC), that exactly achieves this tradeoff. We demonstrate the impact of CDC in (1) speeding up commonly used frameworks for distributed computing, such as MapReduce and Spark, and (2) enabling a scalable solution for wireless distributed computing.
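As a minimal sketch of this inverse-linear relation (using notation assumed here rather than defined in the abstract: $K$ computing nodes, computation load $r$, i.e., the number of times each input file is mapped across the nodes, and data shuffling load $L(r)$), the tradeoff achieved by CDC can be written as
\[
L^{*}(r) \;=\; \frac{1}{r}\left(1 - \frac{r}{K}\right), \qquad r \in \{1, \dots, K\},
\]
% assumption: standard CDC setting with K nodes and shuffling load normalized by the total dataset size
so that increasing the computation load by a factor of $r$ reduces the shuffling load by (approximately) the same factor.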