Thermodynamics


Contents

Chapter 1. Introduction . . . . . 1
  1.1 Classical Thermodynamics . . . . . 1
    1.1.1 Introduction . . . . . 1
    1.1.2 History . . . . . 3
    1.1.3 Branches of description . . . . . 4
    1.1.4 Thermodynamic equilibrium . . . . . 6
    1.1.5 Non-equilibrium thermodynamics . . . . . 7
    1.1.6 Laws of thermodynamics . . . . . 7
    1.1.7 System models . . . . . 8
    1.1.8 States and processes . . . . . 9
    1.1.9 Instrumentation . . . . . 12
    1.1.10 Conjugate variables . . . . . 13
    1.1.11 Potentials . . . . . 13
    1.1.12 Axiomatics . . . . . 13
    1.1.13 Scope of thermodynamics . . . . . 14
    1.1.14 Applied fields . . . . . 15
    1.1.15 See also . . . . . 15
    1.1.16 References . . . . . 15
    1.1.17 Cited bibliography . . . . . 18
    1.1.18 Further reading . . . . . 19
    1.1.19 External links . . . . . 20
  1.2 Statistical Thermodynamics . . . . . 20
    1.2.1 Principles: mechanics and ensembles . . . . . 20
    1.2.2 Statistical thermodynamics . . . . . 21
    1.2.3 Non-equilibrium statistical mechanics . . . . . 23
    1.2.4 Applications outside thermodynamics . . . . . 24
    1.2.5 History . . . . . 24
    1.2.6 See also . . . . . 25
    1.2.7 Notes . . . . . 25
    1.2.8 References . . . . . 25
    1.2.9 External links . . . . . 26
  1.3 Chemical Thermodynamics . . . . . 26
    1.3.1 History . . . . . 26
    1.3.2 Overview . . . . . 27
    1.3.3 Chemical energy . . . . . 27
    1.3.4 Chemical reactions . . . . . 27
    1.3.5 Non equilibrium . . . . . 29
    1.3.6 See also . . . . . 30
    1.3.7 References . . . . . 30
    1.3.8 Further reading . . . . . 30
    1.3.9 External links . . . . . 30
  1.4 Equilibrium Thermodynamics . . . . . 30
    1.4.1 See also . . . . . 31
    1.4.2 References . . . . . 31
  1.5 Non-equilibrium Thermodynamics . . . . . 31
    1.5.1 Scope of non-equilibrium thermodynamics . . . . . 32
    1.5.2 Overview . . . . . 32
    1.5.3 Basic concepts . . . . . 33
    1.5.4 Stationary states, fluctuations, and stability . . . . . 34
    1.5.5 Local thermodynamic equilibrium . . . . . 34
    1.5.6 Entropy in evolving systems . . . . . 35
    1.5.7 Flows and forces . . . . . 35
    1.5.8 The Onsager relations . . . . . 36
    1.5.9 Speculated extremal principles for non-equilibrium processes . . . . . 36
    1.5.10 Applications of non-equilibrium thermodynamics . . . . . 36
    1.5.11 See also . . . . . 36
    1.5.12 References . . . . . 37
    1.5.13 Further reading . . . . . 38
    1.5.14 External links . . . . . 39

Chapter 2. Laws of Thermodynamics . . . . . 40
  2.1 Zeroth law of Thermodynamics . . . . . 40
    2.1.1 Zeroth law as equivalence relation . . . . . 40
    2.1.2 Foundation of temperature . . . . . 41
    2.1.3 Physical meaning of the usual statement of the zeroth law . . . . . 41
    2.1.4 History . . . . . 42
    2.1.5 References . . . . . 42
    2.1.6 Further reading . . . . . 43
  2.2 First law of Thermodynamics . . . . . 43
    2.2.1 History . . . . . 43
    2.2.2 Conceptually revised statement, according to the mechanical approach . . . . . 45
    2.2.3 Description . . . . . 45
    2.2.4 Various statements of the law for closed systems . . . . . 46
    2.2.5 Evidence for the first law of thermodynamics for closed systems . . . . . 48
    2.2.6 State functional formulation for infinitesimal processes . . . . . 51
    2.2.7 Spatially inhomogeneous systems . . . . . 52
    2.2.8 First law of thermodynamics for open systems . . . . . 52
    2.2.9 See also . . . . . 56
    2.2.10 References . . . . . 56
    2.2.11 Further reading . . . . . 59
    2.2.12 External links . . . . . 60
  2.3 Second law of Thermodynamics . . . . . 60
    2.3.1 Introduction . . . . . 60
    2.3.2 Various statements of the law . . . . . 61
    2.3.3 Corollaries . . . . . 64
    2.3.4 History . . . . . 66
    2.3.5 Statistical mechanics . . . . . 68
    2.3.6 Derivation from statistical mechanics . . . . . 68
    2.3.7 Living organisms . . . . . 70
    2.3.8 Gravitational systems . . . . . 70
    2.3.9 Non-equilibrium states . . . . . 71
    2.3.10 Arrow of time . . . . . 71
    2.3.11 Irreversibility . . . . . 71
    2.3.12 Quotations . . . . . 72
    2.3.13 See also . . . . . 73
    2.3.14 References . . . . . 73
    2.3.15 Further reading . . . . . 78
    2.3.16 External links . . . . . 78
  2.4 Third law of Thermodynamics . . . . . 78
    2.4.1 History . . . . . 79
    2.4.2 Explanation . . . . . 79
    2.4.3 Mathematical formulation . . . . . 80
    2.4.4 Consequences of the third law . . . . . 81
    2.4.5 See also . . . . . 82
    2.4.6 References . . . . . 82
    2.4.7 Further reading . . . . . 82

Chapter 3. History . . . . . 83
  3.1 History of thermodynamics . . . . . 83
    3.1.1 History . . . . . 83
    3.1.2 Branches of . . . . . 87
    3.1.3 Entropy and the second law . . . . . 88
    3.1.4 Heat transfer . . . . . 88
    3.1.5 Cryogenics . . . . . 88
    3.1.6 See also . . . . . 88
    3.1.7 References . . . . . 89
    3.1.8 Further reading . . . . . 89
    3.1.9 External links . . . . . 89
  3.2 An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction . . . . . 89
    3.2.1 Background . . . . . 89
    3.2.2 Experiments . . . . . 90
    3.2.3 Reception . . . . . 90
    3.2.4 Notes . . . . . 90
    3.2.5 Bibliography . . . . . 91

Chapter 4. System State . . . . . 92
  4.1 Control volume . . . . . 92
    4.1.1 Overview . . . . . 92
    4.1.2 Substantive derivative . . . . . 92
    4.1.3 See also . . . . . 93
    4.1.4 References . . . . . 93
    4.1.5 External links . . . . . 93
  4.2 Ideal gas . . . . . 93
    4.2.1 Types of ideal gas . . . . . 93
    4.2.2 Classical thermodynamic ideal gas . . . . . 94
    4.2.3 Heat capacity . . . . . 95
    4.2.4 Entropy . . . . . 95
    4.2.5 Thermodynamic potentials . . . . . 96
    4.2.6 Speed of sound . . . . . 96
    4.2.7 Table of ideal gas equations . . . . . 97
    4.2.8 Ideal quantum gases . . . . . 97
    4.2.9 See also . . . . . 97
    4.2.10 References . . . . . 97
  4.3 Real gas . . . . . 97
    4.3.1 Models . . . . . 98
    4.3.2 See also . . . . . 99
    4.3.3 References . . . . . 99
    4.3.4 External links . . . . . 100

Chapter 5. System Processes . . . . . 101
  5.1 Thermodynamic process . . . . . 101
    5.1.1 Kinds of process . . . . . 101
    5.1.2 A cycle of quasi-static processes . . . . . 102
    5.1.3 Conjugate variable processes . . . . . 102
    5.1.4 Thermodynamic potentials . . . . . 103
    5.1.5 Polytropic processes . . . . . 103
    5.1.6 Processes classified by the second law of thermodynamics . . . . . 103
    5.1.7 See also . . . . . 104
    5.1.8 References . . . . . 104
    5.1.9 Further reading . . . . . 104
  5.2 Isobaric process . . . . . 104
    5.2.1 Specific heat capacity . . . . . 105
    5.2.2 Sign convention for work . . . . . 105
    5.2.3 Defining enthalpy . . . . . 106
    5.2.4 Variable density viewpoint . . . . . 106
    5.2.5 See also . . . . . 106
    5.2.6 References . . . . . 106
  5.3 Isochoric process . . . . . 106
    5.3.1 Formalism . . . . . 106
    5.3.2 Ideal Otto cycle . . . . . 107
    5.3.3 Etymology . . . . . 107
    5.3.4 See also . . . . . 107
    5.3.5 References . . . . . 107
    5.3.6 External links . . . . . 107
  5.4 Isothermal process . . . . . 108
    5.4.1 Examples . . . . . 108
    5.4.2 Details for an ideal gas . . . . . 108
    5.4.3 Calculation of work . . . . . 108
    5.4.4 Entropy changes . . . . . 109
    5.4.5 See also . . . . . 110
    5.4.6 References . . . . . 110
  5.5 Adiabatic process . . . . . 110
    5.5.1 Description . . . . . 110
    5.5.2 Adiabatic heating and cooling . . . . . 111
    5.5.3 Ideal gas (reversible process) . . . . . 112
    5.5.4 Graphing adiabats . . . . . 115
    5.5.5 Etymology . . . . . 116
    5.5.6 Conceptual significance in thermodynamic theory . . . . . 116
    5.5.7 Divergent usages of the word adiabatic . . . . . 116
    5.5.8 See also . . . . . 117
    5.5.9 References . . . . . 117
    5.5.10 External links . . . . . 118
  5.6 Isenthalpic process . . . . . 118
    5.6.1 See also . . . . . 118
    5.6.2 References . . . . . 118
  5.7 Isentropic process . . . . . 118
    5.7.1 Background . . . . . 119
    5.7.2 Isentropic processes in thermodynamic systems . . . . . 119
    5.7.3 Isentropic flow . . . . . 120
    5.7.4 See also . . . . . 121
    5.7.5 Notes . . . . . 121
    5.7.6 References . . . . . 121
  5.8 Polytropic process . . . . . 122
    5.8.1 Derivation . . . . . 122
    5.8.2 Applicability . . . . . 123
    5.8.3 Polytropic Specific Heat Capacity . . . . . 123
    5.8.4 Relationship to ideal processes . . . . . 123
    5.8.5 Notation . . . . . 123
    5.8.6 Other . . . . . 123
    5.8.7 See also . . . . . 123
    5.8.8 References . . . . . 124

Chapter 6. System Properties . . . . . 125
  6.1 Introduction to entropy . . . . . 125
    6.1.1 Explanation . . . . . 125
    6.1.2 Example of increasing entropy . . . . . 127
    6.1.3 Origins and uses . . . . . 127
    6.1.4 Heat and entropy . . . . . 127
    6.1.5 Introductory descriptions of entropy . . . . . 129
    6.1.6 See also . . . . . 129
    6.1.7 References . . . . . 129
    6.1.8 Further reading . . . . . 129
  6.2 Entropy . . . . . 129
    6.2.1 History . . . . . 130
    6.2.2 Definitions and descriptions . . . . . 130
    6.2.3 Second law of thermodynamics . . . . . 135
    6.2.4 Applications . . . . . 135
    6.2.5 Entropy change formulas for simple processes . . . . . 137
    6.2.6 Approaches to understanding entropy . . . . . 138
    6.2.7 Interdisciplinary applications of entropy . . . . . 140
    6.2.8 See also . . . . . 141
    6.2.9 Notes . . . . . 142
    6.2.10 References . . . . . 142
    6.2.11 Further reading . . . . . 144
    6.2.12 External links . . . . . 145
  6.3 Pressure . . . . . 145
    6.3.1 Definition . . . . . 146
    6.3.2 Types . . . . . 149
    6.3.3 See also . . . . . 152
    6.3.4 Notes . . . . . 153
    6.3.5 References . . . . . 153
    6.3.6 External links . . . . . 153
  6.4 Thermodynamic temperature . . . . . 153
    6.4.1 Overview . . . . . 154
    6.4.2 The relationship of temperature, motions, conduction, and thermal energy . . . . . 154
    6.4.3 Practical applications for thermodynamic temperature . . . . . 160
    6.4.4 Definition of thermodynamic temperature . . . . . 160
    6.4.5 History . . . . . 162
    6.4.6 See also . . . . . 166
    6.4.7 Notes . . . . . 167
    6.4.8 External links . . . . . 171
  6.5 Volume . . . . . 171
    6.5.1 Overview . . . . . 172
    6.5.2 Heat and work . . . . . 172
    6.5.3 Specific volume . . . . . 172
    6.5.4 Gas volume . . . . . 173
    6.5.5 See also . . . . . 174
    6.5.6 References . . . . . 174

Chapter 7 . . . . . 175
  7.1 Thermodynamic system . . . . . 175
    7.1.1 Overview . . . . . 175
    7.1.2 History . . . . . 176
    7.1.3 Systems in equilibrium . . . . . 176
    7.1.4 Walls . . . . . 177
    7.1.5 Surroundings . . . . . 177
    7.1.6 Closed system . . . . . 177
    7.1.7 Isolated system . . . . . 178
    7.1.8 Selective transfer of matter . . . . . 179
    7.1.9 Open system . . . . . 179
    7.1.10 See also . . . . . 179
    7.1.11 References . . . . . 179

Chapter 8. Material Properties . . . . . 181
  8.1 Heat capacity . . . . . 181
    8.1.1 History . . . . . 181
    8.1.2 Units . . . . . 181
    8.1.3 Measurement of heat capacity . . . . . 182
    8.1.4 Theory of heat capacity . . . . . 186
    8.1.5 Table of specific heat capacities . . . . . 194
    8.1.6 Mass heat capacity of building materials . . . . . 194
    8.1.7 Further reading . . . . . 194
    8.1.8 See also . . . . . 195
    8.1.9 Notes . . . . . 195
    8.1.10 References . . . . . 195
    8.1.11 External links . . . . . 196
  8.2 Compressibility . . . . . 196
    8.2.1 Definition . . . . . 196
    8.2.2 Thermodynamics . . . . . 197
    8.2.3 Earth science . . . . . 198
    8.2.4 Fluid dynamics . . . . . 198
    8.2.5 Negative compressibility . . . . . 198
    8.2.6 See also . . . . . 198
    8.2.7 References . . . . . 198
  8.3 Thermal expansion . . . . . 199
    8.3.1 Overview . . . . . 199
    8.3.2 Coefficient of thermal expansion . . . . . 199
    8.3.3 Expansion in solids . . . . . 200
    8.3.4 Isobaric expansion in gases . . . . . 202
    8.3.5 Expansion in liquids . . . . . 202
    8.3.6 Expansion in mixtures and alloys . . . . . 202
    8.3.7 Apparent and absolute expansion . . . . . 203
    8.3.8 Examples and applications . . . . . 203
    8.3.9 Thermal expansion coefficients for various materials . . . . . 204
    8.3.10 See also . . . . . 205
    8.3.11 References . . . . . 205
    8.3.12 External links . . . . . 206

Chapter 9. Potentials 9.1

207

Thermodynamic potential . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207 9.1.1

Description and interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

9.1.2

Natural variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208

9.1.3

The fundamental equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208

9.1.4

The equations of state

9.1.5

The Maxwell relations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

9.1.6

Euler integrals

9.1.7

The Gibbs–Duhem relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

9.1.8

Chemical reactions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

9.1.9

See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210

9.1.10 Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 9.1.11 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 9.1.12 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 9.1.13 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 9.2

Enthalpy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211 9.2.1

Origins . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212

9.2.2

Formal definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212

9.2.3

Other expressions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

9.2.4

Physical interpretation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

9.2.5

Relationship to heat

9.2.6

Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

9.2.7

Diagrams . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216

9.2.8

See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

9.2.9

Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

9.2.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218 9.2.11 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218 9.2.12 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 9.3

Internal energy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 9.3.1

Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219

9.3.2

Description and definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 220

9.3.3

Internal energy of the ideal gas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221

9.3.4

Internal energy of a closed thermodynamic system . . . . . . . . . . . . . . . . . . . . . . . . 222

x

CONTENTS 9.3.5

Internal energy of multi-component systems . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

9.3.6

Internal energy in an elastic medium . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

9.3.7

History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 224

9.3.8

Notes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

9.3.9

See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225

9.3.10 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 9.3.11 Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225 10 Chapter 10. Equations

226

10.1 Ideal gas law . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226 10.1.1 Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226 10.1.2 Applications to thermodynamic processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 227 10.1.3 Deviations from ideal behavior of real gases . . . . . . . . . . . . . . . . . . . . . . . . . . . 228 10.1.4 Derivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228 10.1.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 10.1.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 10.1.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 10.1.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229 11 Chapter 11. Fundamentals

230

11.1 Fundamental thermodynamic relation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230 11.1.1 Derivation from the first and second laws of thermodynamics . . . . . . . . . . . . . . . . . . 230 11.1.2 Derivation from statistical mechanical principles . . . . . . . . . . . . . . . . . . . . . . . . . 231 11.1.3 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 11.1.4 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 11.2 Heat engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 11.2.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233 11.2.2 Everyday examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233 11.2.3 Examples of heat engines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233 11.2.4 Efficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235 11.2.5 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236 11.2.6 Heat engine enhancements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236 11.2.7 Heat engine processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236 11.2.8 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 11.2.9 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 11.3 Thermodynamic cycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 11.3.1 Heat and work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238 11.3.2 Modelling real systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
240 11.3.3 Well-known thermodynamic cycles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240

CONTENTS

xi

11.3.4 State functions and entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 241 11.3.5 See also . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 11.3.6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 11.3.7 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 11.3.8 External links . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242 12 Text and image sources, contributors, and licenses

243

12.1 Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 12.2 Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253 12.3 Content license . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257

Chapter 1

Chapter 1. Introduction

1.1 Classical Thermodynamics

Annotated color version of the original 1824 Carnot heat engine, showing the hot body (boiler), working body (system, steam), and cold body (water); the letter labels indicate the stopping points in the Carnot cycle.

Thermodynamics is a branch of physics concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints that are common to all materials, beyond the peculiar properties of particular materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The basic results of thermodynamics rely on the existence of idealized states of thermodynamic equilibrium. Its laws are explained by statistical mechanics, in terms of the microscopic constituents.

Thermodynamics applies to a wide variety of topics in science and engineering, especially physical chemistry, chemical engineering and mechanical engineering.

Historically, the distinction between heat and temperature was studied in the 1750s by Joseph Black. Characteristically thermodynamic thinking began in the work of Carnot (1824), who believed that the efficiency of heat engines was the key that could help France win the Napoleonic Wars.[1] The Irish-born British physicist Lord Kelvin was the first to formulate a concise definition of thermodynamics, in 1854:[2]

Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency.

Initially, thermodynamics, as applied to heat engines, was concerned with the thermal properties of their 'working materials', such as steam, in an effort to increase the efficiency and power output of engines. Thermodynamics was later expanded to the study of energy transfers in chemical processes, such as the investigation, published in 1840, of the heats of chemical reactions[3] by Germain Hess, which was not originally explicitly concerned with the relation between energy exchanges by heat and work. From this evolved the study of chemical thermodynamics and the role of entropy in chemical reactions.[4][5][6][7][8][9][10][11]

1.1.1 Introduction

Historically, thermodynamics arose from the study of two distinct kinds of transfer of energy, as heat and as work, and the relation of those to the system's macroscopic variables of volume, pressure and temperature.[12][13] As it developed, thermodynamics began also to study transfers of matter.
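Carnot's original question about heat-engine efficiency has a sharp answer in the developed theory: no engine working between a hot reservoir at temperature Th and a cold one at Tc can convert a larger fraction of the drawn heat into work than 1 − Tc/Th. The formula is the standard textbook Carnot bound, not derived in this introduction; the numerical values below are purely illustrative:

```python
def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Maximum fraction of input heat convertible to work by any heat
    engine running between two reservoirs (temperatures in kelvin)."""
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("require 0 < t_cold < t_hot (kelvin)")
    return 1.0 - t_cold / t_hot

# A boiler at 450 K rejecting heat at 300 K can convert at most
# one third of the heat drawn from the boiler into work.
eta = carnot_efficiency(450.0, 300.0)
print(f"Carnot limit: {eta:.3f}")
```

The bound depends only on the two reservoir temperatures, not on the working material — which is precisely why the 'working materials' mentioned above could later recede in importance.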

The plain term 'thermodynamics' refers to a macroscopic description of bodies and processes.[14] Reference to atomic constitution is foreign to classical thermodynamics.[15] Usually the plain term 'thermodynamics' refers by default to equilibrium as opposed to non-equilibrium thermodynamics. The qualified term 'statistical thermodynamics' refers to descriptions of bodies and processes in terms of the atomic or other microscopic constitution of matter, using statistical and probabilistic reasoning.

Thermodynamic equilibrium is one of the most important concepts for thermodynamics.[16][17] The temperature of a thermodynamic system is well defined, and is perhaps the most characteristic quantity of thermodynamics. As the systems and processes of interest are taken further from thermodynamic equilibrium, their exact thermodynamical study becomes more difficult. Relatively simple approximate calculations, however, using the variables of equilibrium thermodynamics, are of much practical value. Many important practical engineering cases, as in heat engines or refrigerators, can be approximated as systems consisting of many subsystems at different temperatures and pressures. If a physical process is too fast, the equilibrium thermodynamic variables, for example temperature, may not be well enough defined to provide a useful approximation.

Central to thermodynamic analysis are the definitions of the system, which is of interest, and of its surroundings.[8][18] The surroundings of a thermodynamic system consist of physical devices and of other thermodynamic systems that can interact with it. An example of a thermodynamic surrounding is a heat bath, which is held at a prescribed temperature, regardless of how much heat might be drawn from it.

There are four fundamental kinds of physical entities in thermodynamics:

• states of a system, and the states of its surrounding systems,
• walls of a system,[19][20][21][22][23]
• thermodynamic processes of a system, and
• thermodynamic operations.

This allows two fundamental approaches to thermodynamic reasoning: that in terms of states of a system, and that in terms of cyclic processes of a system.

A thermodynamic system can be defined in terms of its states.[17] In this way, a thermodynamic system is a macroscopic physical object, explicitly specified in terms of macroscopic physical and chemical variables that describe its macroscopic properties. The macroscopic state variables of thermodynamics have been recognized in the course of empirical work in physics and chemistry.[9]

Always associated with the material that constitutes a system, its working substance, are the walls that delimit the system and connect it with its surroundings. The state variables chosen for the system should be appropriate for the natures of the walls and surroundings.[24]

A thermodynamic operation is an artificial physical manipulation that changes the definition of a system or its surroundings. Usually it is a change of the permeability of a wall of the system,[25] that allows energy (as heat or work) or matter (mass) to be exchanged with the environment. For example, the partition between two thermodynamic systems can be removed so as to produce a single system. A thermodynamic operation that increases the range of possible transfers usually leads to a thermodynamic process of transfer of mass or energy that changes the state of the system, and the transfer occurs in natural accord with the laws of thermodynamics. But if the operation simply reduces the possible range of transfers, in general it does not initiate a process. The states of the system's surrounding systems are assumed to be unchanging in time except when they are changed by a thermodynamic operation, whereupon a thermodynamic process can be initiated.

A thermodynamic system can also be defined in terms of the cyclic processes that it can undergo.[26] A cyclic process is a cyclic sequence of thermodynamic operations and processes that can be repeated indefinitely often without changing the final state of the system.

For thermodynamics and statistical thermodynamics to apply to a physical system, it is necessary that its internal atomic mechanisms fall into one of two classes:

• those so rapid that, in the time frame of the process of interest, the atomic states rapidly bring the system to its own state of internal thermodynamic equilibrium; and
• those so slow that, in the time frame of the process of interest, they leave the system unchanged.[27][28]

The rapid atomic mechanisms account for the internal energy of the system. They mediate the macroscopic changes that are of interest for thermodynamics and statistical thermodynamics, because they quickly bring the system near enough to thermodynamic equilibrium. "When intermediate rates are present, thermodynamics and statistical mechanics cannot be applied."[27] Such intermediate-rate atomic processes do not bring the system near enough to thermodynamic equilibrium in the time frame of the macroscopic process of interest. This separation of time scales of atomic processes is a theme that recurs throughout the subject.

For example, classical thermodynamics is characterized by its study of materials that have equations of state or characteristic equations. They express equilibrium relations between macroscopic mechanical variables and temperature and internal energy. They express the constitutive peculiarities of the material of the system. A classical material can usually be described by a function that makes pressure dependent on volume and temperature, the resulting pressure being established much more rapidly than any imposed change of volume or temperature.[29][30][31][32]

The present article takes a gradual approach to the subject, starting with a focus on cyclic processes and thermodynamic equilibrium, and then gradually beginning to further consider non-equilibrium systems.

Thermodynamic facts can often be explained by viewing macroscopic objects as assemblies of very many microscopic or atomic objects that obey Hamiltonian dynamics.[8][33][34] The microscopic or atomic objects exist in species, the objects of each species being all alike. Because of this likeness, statistical methods can be used to account for the macroscopic properties of the thermodynamic system in terms of the properties of the microscopic species. Such explanation is called statistical thermodynamics; it is also often referred to by the term 'statistical mechanics', though this term can have a wider meaning, referring to 'microscopic objects', such as economic quantities, that do not obey Hamiltonian dynamics.[33]
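A characteristic equation of the kind just described — pressure as a function of volume and temperature — can be made concrete with the simplest standard example, the ideal-gas equation of state p = nRT/V. This is a sketch with illustrative values, not a formula taken from this text:

```python
R = 8.314  # J/(mol K), molar gas constant

def pressure_ideal(n_mol: float, temp_k: float, vol_m3: float) -> float:
    """Equation of state p(V, T) for an ideal gas: p = nRT/V.
    The pressure is treated as instantaneously established for any
    imposed volume and temperature, as the text describes."""
    return n_mol * R * temp_k / vol_m3

# One mole at 300 K in 0.025 m^3 gives roughly atmospheric pressure.
p = pressure_ideal(1.0, 300.0, 0.025)
print(f"{p:.0f} Pa")
```

Real materials need more elaborate constitutive functions, but the structural point is the same: the equilibrium state is fixed once the mechanical variables and temperature are given.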

1.1.2 History

The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in about 1650,[35] designed and built the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with the scientist Robert Hooke, built an air pump.[36] Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, they formulated Boyle's Law, which states that for a gas at constant temperature, its pressure and volume are inversely proportional. In 1679, based on these concepts, an associate of Boyle's named Denis Papin built a steam digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated. Later versions of this design implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, the engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time.

The thermodynamicists representative of the original eight founding schools of thermodynamics. The schools with the most-lasting effect in founding the modern versions of thermodynamics are the Berlin school, particularly as established in Rudolf Clausius's 1865 textbook The Mechanical Theory of Heat; the Vienna school, with the statistical mechanics of Ludwig Boltzmann; and the Gibbsian school at Yale University, led by the American engineer Willard Gibbs, whose 1876 On the Equilibrium of Heterogeneous Substances launched chemical thermodynamics.

The concepts of heat capacity and latent heat, which were necessary for the development of thermodynamics, were developed by Professor Joseph Black at the University of Glasgow, where James Watt worked as an instrument maker. Watt consulted with Black on tests of his steam engine, but it was Watt who conceived the idea of the external condenser, greatly raising the steam engine's efficiency.[37]

All the previous work led Sadi Carnot, the "father of thermodynamics", to publish Reflections on the Motive Power of Fire (1824), a discourse on heat, power, energy, and engine efficiency. The paper outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.[11]

The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a professor of civil and mechanical engineering at the University of Glasgow.[38] The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).

The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.

The lifetimes of some of the most important contributors to thermodynamics

From 1873 to '76, the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being "On the equilibrium of heterogeneous substances".[4] Gibbs showed how thermodynamic processes, including chemical reactions, could be graphically analyzed. By studying the energy, entropy, volume, chemical potential, temperature and pressure of the thermodynamic system, one can determine whether a process would occur spontaneously.[39] Chemical thermodynamics was further developed by Pierre Duhem,[5] Gilbert N. Lewis, Merle Randall,[6] and E. A. Guggenheim,[7][8] who applied the mathematical methods of Gibbs.

Etymology

The etymology of thermodynamics has an intricate history. It was first spelled in a hyphenated form as an adjective (thermo-dynamic) in 1849, and from 1854 to 1859 as the hyphenated noun thermo-dynamics, to represent the science of heat and motive power, and thereafter as thermodynamics.

The components of the word thermo-dynamic are derived from the Greek words θέρμη therme, meaning "heat", and δύναμις dynamis, meaning "power" (Haynie claims that the word was coined around 1840).[40][41]

The term thermo-dynamic was first used in January 1849 by William Thomson, later Lord Kelvin, in the phrase a perfect thermo-dynamic engine to describe Sadi Carnot's heat engine.[42]:545 In April 1849, Thomson added an appendix to his paper and used the term thermodynamic in the phrase the object of a thermodynamic engine.[42]:569

Pierre Perrot claims that the term thermodynamics was coined by James Joule in 1858 to designate the science of relations between heat and power.[11] Joule, however, never used that term, but did use the term perfect thermo-dynamic engine in reference to Thomson's 1849 phraseology,[42]:545 and Thomson's note on Joule's 1851 paper On the Air-Engine.

In 1854, thermo-dynamics, as a functional term to denote the general study of the action of heat, was first used by William Thomson in his paper "On the Dynamical Theory of Heat".[2]

In 1859, the closed compound form thermodynamics was first used by William Rankine in A Manual of the Steam Engine, in a chapter on the Principles of Thermodynamics.[43]

1.1.3 Branches of description

Thermodynamic systems are theoretical constructions used to model physical systems that exchange matter and energy in terms of the laws of thermodynamics. The study of thermodynamical systems has developed into several related branches, each using a different fundamental model as a theoretical or experimental basis, or applying the principles to varying types of systems.

Classical thermodynamics

Classical thermodynamics accounts for the adventures of a thermodynamic system in terms either of its time-invariant equilibrium states or else of its continually repeated cyclic processes, but, formally, not both in the same account. It uses only time-invariant, or equilibrium, macroscopic quantities measurable in the laboratory, counting as time-invariant a long-term time-average of a quantity, such as a flow, generated by a continually repetitive process.[44][45] In classical thermodynamics, rates of change are not admitted as variables of interest. An equilibrium state stands endlessly without change over time, while a continually repeated cyclic process runs endlessly without a net change in the system over time.

In the account in terms of equilibrium states of a system, a state of thermodynamic equilibrium in a simple system is spatially homogeneous.

In the classical account solely in terms of a cyclic process, the spatial interior of the 'working body' of that process is not considered; the 'working body' thus does not have a defined internal thermodynamic state of its own, because no assumption is made that it should be in thermodynamic equilibrium; only its inputs and outputs of energy as heat and work are considered.[46] It is common to describe a cycle theoretically as composed of a sequence of very many thermodynamic operations and processes. This creates a link to the description in terms of equilibrium states. The cycle is then theoretically described as a continuous progression of equilibrium states.
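Describing a cycle as a continuous progression of equilibrium states is what allows the work of one leg to be computed as an integral of p dV along the path of states. A sketch for a single isothermal leg of an idealized cycle, using the ideal gas as the working body and comparing the numerical integral against the standard closed form W = nRT ln(V2/V1) — the values are illustrative, not from this text:

```python
import math

R = 8.314  # J/(mol K), molar gas constant

def isothermal_work(n: float, temp: float, v1: float, v2: float,
                    steps: int = 100_000) -> float:
    """Work done by an ideal gas along a quasi-static isothermal expansion,
    integrating p dV with p = nRT/V over many near-equilibrium states."""
    dv = (v2 - v1) / steps
    work = 0.0
    for i in range(steps):
        v_mid = v1 + (i + 0.5) * dv  # midpoint rule
        work += (n * R * temp / v_mid) * dv
    return work

n, temp, v1, v2 = 1.0, 300.0, 0.01, 0.02
numeric = isothermal_work(n, temp, v1, v2)
exact = n * R * temp * math.log(v2 / v1)
print(numeric, exact)
```

The agreement between the sum over small near-equilibrium steps and the closed form is the numerical face of the idea that a quasi-static process is a progression of equilibrium states.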

Classical thermodynamics was originally concerned with the transformation of energy in a cyclic process, and the exchange of energy between closed systems defined only by their equilibrium states. The distinction between transfers of energy as heat and as work was central. As classical thermodynamics developed, the distinction between heat and work became less central. This was because there was more interest in open systems, for which the distinction between heat and work is not simple, and is beyond the scope of the present article. Alongside the amount of heat transferred as a fundamental quantity, entropy was gradually found to be a more generally applicable concept, especially when considering chemical reactions. Massieu in 1869 considered entropy as the basic dependent thermodynamic variable, with energy potentials and the reciprocal of the thermodynamic temperature as fundamental independent variables. Massieu functions can be useful in present-day non-equilibrium thermodynamics. In 1875, in the work of Josiah Willard Gibbs, entropy was considered a fundamental independent variable, while internal energy was a dependent variable.[47]

All actual physical processes are to some degree irreversible. Classical thermodynamics can consider irreversible processes, but its account in exact terms is restricted to variables that refer only to initial and final states of thermodynamic equilibrium, or to rates of input and output that do not change with time. For example, classical thermodynamics can consider time-average rates of flows generated by continually repeated irreversible cyclic processes. Also it can consider irreversible changes between equilibrium states of systems consisting of several phases (as defined below in this article), or with removable or replaceable partitions. But for systems that are described in terms of equilibrium states, it considers neither flows, nor spatial inhomogeneities in simple systems with no externally imposed force fields such as gravity. In the account in terms of equilibrium states of a system, descriptions of irreversible processes refer only to initial and final static equilibrium states; the time it takes to change thermodynamic state is not considered.[48][49]
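The notion of a time-average rate of flow generated by a continually repeated cyclic process — time-invariant even though the instantaneous flow is not — can be sketched numerically. Everything below (the pulsating flow q(t) and its parameters) is a hypothetical illustration:

```python
import math

def time_average(flow, period: float, cycles: int,
                 steps_per_cycle: int = 1000) -> float:
    """Long-term time-average of a periodic instantaneous flow rate,
    approximated by a Riemann sum over a whole number of cycles."""
    n = cycles * steps_per_cycle
    dt = period / steps_per_cycle
    total = sum(flow(i * dt) * dt for i in range(n))
    return total / (n * dt)

# A pulsating flow q(t) = q0 * (1 + sin(2*pi*t/T)) averages to q0:
q0, period = 3.0, 2.0
avg = time_average(lambda t: q0 * (1.0 + math.sin(2 * math.pi * t / period)),
                   period, cycles=50)
assert abs(avg - q0) < 1e-6
```

The average over whole cycles is the same however many cycles are taken — it is this kind of quantity that classical thermodynamics admits as a time-invariant variable.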

Local equilibrium thermodynamics

Local equilibrium thermodynamics is concerned with the time courses and rates of progress of irreversible processes in systems that are smoothly spatially inhomogeneous. It admits time as a fundamental quantity, but only in a restricted way. Rather than considering time-invariant flows as long-term-average rates of cyclic processes, local equilibrium thermodynamics considers time-varying flows in systems that are described by states of local thermodynamic equilibrium, as follows.

For processes that involve only suitably small and smooth spatial inhomogeneities and suitably small changes with time, a good approximation can be found through the assumption of local thermodynamic equilibrium. Within the large or global region of a process, for a suitably small local region, this approximation assumes that a quantity known as the entropy of the small local region can be defined in a particular way. That particular way of defining entropy is largely beyond the scope of the present article, but here it may be said that it is entirely derived from the concepts of classical thermodynamics; in particular, neither flow rates nor changes over time are admitted into the definition of the entropy of the small local region. It is assumed without proof that the instantaneous global entropy of a non-equilibrium system can be found by adding up the simultaneous instantaneous entropies of its constituent small local regions.

Local equilibrium thermodynamics considers processes that involve the time-dependent production of entropy by dissipative processes, in which kinetic energy of bulk flow and chemical potential energy are converted into internal energy at time-rates that are explicitly accounted for. Time-varying bulk flows and specific diffusional flows are considered, but they are required to be dependent variables, derived only from material properties described only by static macroscopic equilibrium states of small local regions. The independent state variables of a small local region are only those of classical thermodynamics.

Generalized or extended thermodynamics

Like local equilibrium thermodynamics, generalized or extended thermodynamics is concerned with the time courses and rates of progress of irreversible processes in systems that are smoothly spatially inhomogeneous. It describes time-varying flows in terms of states of suitably small local regions within a global region that is smoothly spatially inhomogeneous, rather than considering flows as time-invariant long-term-average rates of cyclic processes. In its accounts of processes, generalized or extended thermodynamics admits time as a fundamental quantity in a more far-reaching way than does local equilibrium thermodynamics. The states of small local regions are defined by macroscopic quantities that are explicitly allowed to vary with time, including time-varying flows. Generalized thermodynamics might tackle such problems as ultrasound or shock waves, in which there are strong spatial inhomogeneities and changes in time fast enough to outpace a tendency towards local thermodynamic equilibrium. Generalized or extended thermodynamics is a diverse and developing project, rather than a more or less completed subject such as classical thermodynamics.[50][51]

For generalized or extended thermodynamics, the definition of the quantity known as the entropy of a small local region is in terms beyond those of classical thermodynamics; in particular, flow rates are admitted into the definition of the entropy of a small local region. The independent state variables of a small local region include flow rates, which are not admitted as independent variables for the small local regions of local equilibrium thermodynamics.
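The local-equilibrium additivity assumption described above can be stated compactly. The following notation is our own sketch, not the article's: each small local region i is assigned a classical entropy depending only on its local equilibrium state variables, and the instantaneous global entropy is taken as their sum.

```latex
% Additivity assumption of local equilibrium thermodynamics (notation ours):
% S_i is the classical entropy of small local region i, a function of its
% local internal energy U_i, volume V_i, and amount of substance N_i.
S_{\text{global}}(t) \;=\; \sum_{i} S_{i}\bigl(U_{i}(t),\, V_{i}(t),\, N_{i}(t)\bigr)
```

Note that no flow rates appear among the arguments; that restriction is exactly what distinguishes local equilibrium thermodynamics from the generalized or extended theories discussed next.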


Outside the range of classical thermodynamics, the definition of the entropy of a small local region is no simple matter. For a thermodynamic account of a process in terms of the entropies of small local regions, the definition of entropy should be such as to ensure that the second law of thermodynamics applies in each small local region. It is often assumed without proof that the instantaneous global entropy of a non-equilibrium system can be found by adding up the simultaneous instantaneous entropies of its constituent small local regions. For a given physical process, the selection of suitable independent local non-equilibrium macroscopic state variables for the construction of a thermodynamic description calls for qualitative physical understanding, rather than being a simply mathematical problem concerned with a uniquely determined thermodynamic description. A suitable definition of the entropy of a small local region depends on the physically insightful and judicious selection of the independent local non-equilibrium macroscopic state variables, and different selections provide different generalized or extended thermodynamic accounts of one and the same given physical process. This is one of the several good reasons for considering entropy as an epistemic physical variable, rather than as a simply material quantity.[22] According to a respected author: “There is no compelling reason to believe that the classical thermodynamic entropy is a measurable property of nonequilibrium phenomena, ...”[52]

Statistical thermodynamics

Statistical thermodynamics, also called statistical mechanics, emerged with the development of atomic and molecular theories in the second half of the 19th century and the early 20th century. It provides an explanation of classical thermodynamics. It considers the microscopic interactions between individual particles and their collective motions, in terms of classical or of quantum mechanics. Its explanation is in terms of statistics that rest on the fact that the system is composed of several species of particles or collective motions, the members of each species respectively being in some sense all alike.

1.1.4 Thermodynamic equilibrium

Equilibrium thermodynamics studies transformations of matter and energy in systems at or near thermodynamic equilibrium. In thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. In thermodynamic equilibrium no macroscopic change is occurring or can be triggered; within the system, every microscopic process is balanced by its opposite; this is called the principle of detailed balance. A central aim of equilibrium thermodynamics is: given a system in a well-defined initial state, subject to specified constraints, to calculate what the equilibrium state of the system is.[53]

In theoretical studies, it is often convenient to consider the simplest kind of thermodynamic system. This is defined variously by different authors.[48][54][55][56][57][58] For the present article, the following definition is convenient, as abstracted from the definitions of various authors. A region of material with all intensive properties continuous in space and time is called a phase. A simple system is, for the present article, defined as one that consists of a single phase of a pure chemical substance, with no interior partitions. Within a simple isolated thermodynamic system in thermodynamic equilibrium, in the absence of externally imposed force fields, all properties of the material of the system are spatially homogeneous.[59] Much of the basic theory of thermodynamics is concerned with homogeneous systems in thermodynamic equilibrium.[4][60]

Most systems found in nature or considered in engineering are not in thermodynamic equilibrium, exactly considered. They are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems.[22] For example, according to Callen, “in absolute thermodynamic equilibrium all radioactive materials would have decayed completely and nuclear reactions would have transmuted all nuclei to the most stable isotopes. Such processes, which would take cosmic times to complete, generally can be ignored.”[22] Such processes being ignored, many systems in nature are close enough to thermodynamic equilibrium that for many purposes their behaviour can be well approximated by equilibrium calculations.

Quasi-static transfers between simple systems are nearly in thermodynamic equilibrium and are reversible

It very much eases and simplifies theoretical thermodynamic studies to imagine transfers of energy and matter between two simple systems that proceed so slowly that at all times each simple system, considered separately, is near enough to thermodynamic equilibrium. Such processes are sometimes called quasi-static and are near enough to being reversible.[61][62]


Natural processes are partly described by a tendency towards thermodynamic equilibrium and are irreversible

If not initially in thermodynamic equilibrium, simple isolated thermodynamic systems, as time passes, tend to evolve naturally towards thermodynamic equilibrium. In the absence of externally imposed force fields, they become homogeneous in all their local properties. Such homogeneity is an important characteristic of a system in thermodynamic equilibrium in the absence of externally imposed force fields.

Many thermodynamic processes can be modeled by compound or composite systems, consisting of several or many contiguous component simple systems, initially not in thermodynamic equilibrium, but allowed to transfer mass and energy between them. Natural thermodynamic processes are described in terms of a tendency towards thermodynamic equilibrium within simple systems and in transfers between contiguous simple systems. Such natural processes are irreversible.[63]

1.1.5 Non-equilibrium thermodynamics

Non-equilibrium thermodynamics[64] is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium; it is also called the thermodynamics of irreversible processes.

1.1.6 Laws of thermodynamics

Main article: Laws of thermodynamics

Thermodynamics states a set of four laws that are valid for all systems that fall within the constraints implied by each. In the various theoretical descriptions of thermodynamics these laws may be expressed in seemingly differing forms, but the most prominent formulations are the following:

• Zeroth law of thermodynamics: If two systems are each in thermal equilibrium with a third, they are also in thermal equilibrium with each other.

This statement implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems under consideration. Systems are said to be in thermal equilibrium with each other if spontaneous molecular thermal energy exchanges between them do not lead to a net exchange of energy. This law is tacitly assumed in every measurement of temperature. For two bodies known to be at the same temperature, deciding if they are in thermal equilibrium when put into thermal contact does not require actually bringing them into contact and measuring any changes of their observable properties in time.[65] In traditional statements, the law provides an empirical definition of temperature and justification for the construction of practical thermometers. In contrast to absolute thermodynamic temperatures, empirical temperatures are measured just by the mechanical properties of bodies, such as their volumes, without reliance on the concepts of energy, entropy, or the first, second, or third laws of thermodynamics.[56][66] Empirical temperatures lead to calorimetry for heat transfer in terms of the mechanical properties of bodies, without reliance on mechanical concepts of energy.

The physical content of the zeroth law has long been recognized. For example, Rankine in 1853 defined temperature as follows: “Two portions of matter are said to have equal temperatures when neither tends to communicate heat to the other.”[67] Maxwell in 1872 stated a “Law of Equal Temperatures”.[68] He also stated: “All Heat is of the same kind.”[69] Planck explicitly assumed and stated it in its customary present-day wording in his formulation of the first two laws.[70] By the time the desire arose to number it as a law, the other three had already been assigned numbers, and so it was designated the zeroth law.

• First law of thermodynamics: The increase in internal energy of a closed system is equal to the difference of the heat supplied to the system and the work done by the system: ΔU = Q − W.[71][72][73][74][75][76][77][78][79][80] (Note that due to the ambiguity of what constitutes positive work, some sources state that ΔU = Q + W, in which case work done on the system is positive.)

The first law of thermodynamics asserts the existence of a state variable for a system, the internal energy, and tells how it changes in thermodynamic processes. The law allows a given internal energy of a system to be reached by any combination of heat and work. It is important that internal energy is a variable of state of the system (see Thermodynamic state) whereas heat and work are variables that describe processes or changes of the state of systems. The first law observes that the internal energy of an isolated system obeys the principle of conservation of energy, which states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.[81][82][83][84][85]

• Second law of thermodynamics: Heat cannot spontaneously flow from a colder location to a hotter location.

The second law of thermodynamics is an expression of the universal principle of dissipation of kinetic and potential energy observable in nature. The second law is an observation of the fact that over time, differences in temperature, pressure, and chemical potential tend to even out in
a physical system that is isolated from the outside world. Entropy is a measure of how much this process has progressed. The entropy of an isolated system that is not in equilibrium tends to increase over time, approaching a maximum value at equilibrium. In classical thermodynamics, the second law is a basic postulate applicable to any system involving heat energy transfer; in statistical thermodynamics, the second law is a consequence of the assumed randomness of molecular chaos. There are many versions of the second law, but they all have the same effect, which is to explain the phenomenon of irreversibility in nature.
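The first two laws can be illustrated with a short numerical sketch. This is our own illustration, not from the text, and it assumes a constant heat capacity for each body: two identical blocks in an isolated enclosure exchange heat until their temperatures equalize; energy is conserved throughout, while the total entropy strictly increases.

```python
import math

# Sketch (ours): two identical blocks, each with an assumed constant heat
# capacity C, are placed in thermal contact inside an isolated enclosure.
C = 385.0  # J/K, assumed heat capacity of each block

def equilibrate(T_hot, T_cold):
    """Return final temperature, heat transferred, and total entropy change."""
    # First law: the enclosure is isolated, so heat lost by the hot block
    # equals heat gained by the cold one: C*(T_hot - Tf) = C*(Tf - T_cold).
    T_final = (T_hot + T_cold) / 2.0
    q = C * (T_hot - T_final)  # heat that flowed from hot to cold
    # Second law: for a constant heat capacity, dS = C * ln(T_final/T_initial);
    # the sum over both blocks is positive whenever T_hot != T_cold.
    dS = C * math.log(T_final / T_hot) + C * math.log(T_final / T_cold)
    return T_final, q, dS

T_final, q, dS = equilibrate(400.0, 300.0)
print(T_final, q, round(dS, 2))  # 350.0 19250.0 7.94
```

Running the reverse process (heat flowing from the cold block to the hot one) would give a negative total entropy change, which is exactly what the second law forbids.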

• Third law of thermodynamics: As a system approaches absolute zero the entropy of the system approaches a minimum value.

The third law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions of the third law are, “the entropy of all systems and of all states of a system is smallest at absolute zero,” or equivalently “it is impossible to reach the absolute zero of temperature by any finite number of processes”.

Absolute zero is −273.15 °C (degrees Celsius), −459.67 °F (degrees Fahrenheit), 0 K (kelvin), or 0 R (Rankine).

1.1.7 System models

A diagram of a generic thermodynamic system

The thermodynamic system is an important concept of thermodynamics. It is a precisely defined region of the universe under study. Everything in the universe except the system is known as the surroundings. A system is separated from the remainder of the universe by a boundary, which may be actual, or merely notional and fictive, but which by convention delimits a finite volume. Transfers of work, heat, or matter between the system and the surroundings take place across this boundary, which may or may not have properties that restrict what can be transferred across it. A system may have several distinct boundary sectors or partitions separating it from the surroundings, each characterized by how it restricts transfers, and being permeable to its characteristic transferred quantities.

The volume can be the region surrounding a single atom resonating energy, as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824; it can be the body of a tropical cyclone, as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics; it could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.

Anything that passes across the boundary needs to be accounted for in a proper transfer balance equation. Thermodynamics is largely about such transfers.

Boundary sectors are of various characters: rigid, flexible, fixed, moveable, actually restrictive, and fictive or not actually restrictive. For example, in an engine, a fixed boundary sector means the piston is locked at its position; then no pressure-volume work is done across it. In that same engine, a moveable boundary allows the piston to move in and out, permitting pressure-volume work. There is no restrictive boundary sector for the whole earth including its atmosphere, and so roughly speaking, no pressure-volume work is done on or by the whole earth system. Such a system is sometimes said to be diabatically heated or cooled by radiation.[86][87]

Thermodynamics distinguishes classes of systems by their boundary sectors.

• An open system has a boundary sector that is permeable to matter; such a sector is usually permeable also to energy, but the energy that passes cannot in general be uniquely sorted into heat and work components. Open system boundaries may be either actually restrictive, or else non-restrictive.

• A closed system has no boundary sector that is permeable to matter, but in general its boundary is permeable to energy. For closed systems, boundaries are totally prohibitive of matter transfer.

• An adiabatically isolated system has only adiabatic boundary sectors. Energy can be transferred as work, but transfers of matter and of energy as heat are prohibited.

• A purely diathermically isolated system has only boundary sectors permeable only to heat; it is sometimes said to be adynamically isolated and closed to matter transfer. A process in which no work is transferred is sometimes called adynamic.[88]

• An isolated system has only isolating boundary sectors. Nothing can be transferred into or out of it.

Engineering and natural processes are often described as composites of many different component simple systems, sometimes with unchanging or changing partitions between them. A change of partition is an example of a thermodynamic operation.

1.1.8 States and processes

There are four fundamental kinds of entity in thermodynamics: states of a system, walls between systems, thermodynamic processes, and thermodynamic operations. This allows three fundamental approaches to thermodynamic reasoning: that in terms of states of thermodynamic equilibrium of a system, that in terms of time-invariant processes of a system, and that in terms of cyclic processes of a system.

The approach through states of thermodynamic equilibrium of a system requires a full account of the state of the system as well as a notion of process from one state to another of a system, but may require only an idealized or partial account of the state of the surroundings of the system or of other systems.

The method of description in terms of states of thermodynamic equilibrium has limitations. For example, processes in a region of turbulent flow, or in a burning gas mixture, or in a Knudsen gas may be beyond “the province of thermodynamics”.[89][90][91] This problem can sometimes be circumvented through the method of description in terms of cyclic or of time-invariant flow processes. This is part of the reason why the founders of thermodynamics often preferred the cyclic process description.

Approaches through processes of time-invariant flow of a system are used for some studies. Some processes, for example Joule-Thomson expansion, are studied through steady-flow experiments, but can be accounted for by distinguishing the steady bulk flow kinetic energy from the internal energy, and thus can be regarded as within the scope of classical thermodynamics defined in terms of equilibrium states or of cyclic processes.[44][92] Other flow processes, for example thermoelectric effects, are essentially defined by the presence of differential flows or diffusion, so that they cannot be adequately accounted for in terms of equilibrium states or classical cyclic processes.[93][94]

The notion of a cyclic process does not require a full account of the state of the system, but does require a full account of how the process occasions transfers of matter and energy between the principal system (which is often called the working body) and its surroundings, which must include at least two heat reservoirs at different known and fixed temperatures, one hotter than the principal system and the other colder than it, as well as a reservoir that can receive energy from the system as work and can do work on the system. The reservoirs can alternatively be regarded as auxiliary idealized component systems, alongside the principal system. Thus an account in terms of cyclic processes requires at least four contributory component systems. The independent variables of this account are the amounts of energy that enter and leave the idealized auxiliary systems. In this kind of account, the working body is often regarded as a “black box”,[95] and its own state is not specified. In this approach, the notion of a properly numerical scale of empirical temperature is a presupposition of thermodynamics, not a notion constructed by or derived from it.

Account in terms of states of thermodynamic equilibrium

When a system is at thermodynamic equilibrium under a given set of conditions of its surroundings, it is said to be in a definite thermodynamic state, which is fully described by its state variables.

If a system is simple as defined above, and is in thermodynamic equilibrium, and is not subject to an externally imposed force field, such as gravity, electricity, or magnetism, then it is homogeneous, that is to say, spatially uniform in all respects.[96] In a sense, a homogeneous system can be regarded as spatially zero-dimensional, because it has no spatial variation.

If a system in thermodynamic equilibrium is homogeneous, then its state can be described by a few physical variables, which are mostly classifiable as intensive variables and extensive variables.[8][33][97][98][99] An intensive variable is one that is unchanged by the thermodynamic operation of scaling of a system. An extensive variable is one that simply scales with the scaling of a system, without the further requirement, used just below here, of additivity even when there is inhomogeneity of the added systems.

Examples of extensive thermodynamic variables are total mass and total volume. Under the above definition, entropy is also regarded as an extensive variable. Examples of intensive thermodynamic variables are temperature, pressure, and chemical concentration; intensive thermodynamic variables are defined at each spatial point and each instant of time in a system. Physical macroscopic variables can be mechanical, material, or thermal.[33] Temperature is a thermal variable; according to Guggenheim, “the most important conception in thermodynamics is temperature.”[8]

Intensive variables have the property that if any number of systems, each in its own separate homogeneous thermodynamic equilibrium state, all with the same respective values of all of their intensive variables, regardless of the values of their extensive variables, are laid contiguously with no partition between them, so as to form a new system, then the values of the intensive variables of the new system are the same as those of the separate constituent systems. Such a composite system is in a homogeneous thermodynamic equilibrium. Examples of intensive variables are temperature, chemical concentration, pressure, density of mass, density of internal energy, and, when it can be properly defined, density of entropy.[100] In other words, intensive variables are not altered by the thermodynamic operation of scaling.

For the immediately present account just below, an alternative definition of extensive variables is considered, that requires that if any number of systems, regardless of their possible separate thermodynamic equilibrium or non-equilibrium states or intensive variables, are laid side by side with no partition between them so as to form a new system, then the values of the extensive variables of the new system are the sums of the values of the respective extensive variables of the individual separate constituent systems. Obviously, there is no reason to expect such a composite system to be in a homogeneous thermodynamic equilibrium. Examples of extensive variables in this alternative definition are mass, volume, and internal energy.
They depend on the total quantity of mass in the system.[101] In other words, although extensive variables scale with the system under the thermodynamic operation of scaling, nevertheless the present alternative definition of an extensive variable requires more than this: it requires also its additivity regardless of the inhomogeneity (or equality or inequality of the values of the intensive variables) of the component systems. Though, when it can be properly defined, density of entropy is an intensive variable, for inhomogeneous systems, entropy itself does not fit into this alternative classification of state variables.[102][103] The reason is that entropy is a property of a system as a whole, and not necessarily related simply to its constituents separately. It is true that for any number of systems each in its own separate homogeneous thermodynamic equilibrium, all with the same values of intensive variables, removal of the partitions between the separate systems results in a composite homogeneous system in

thermodynamic equilibrium, with all the values of its intensive variables the same as those of the constituent systems, and it is reservedly or conditionally true that the entropy of such a restrictively defined composite system is the sum of the entropies of the constituent systems. But if the constituent systems do not satisfy these restrictive conditions, the entropy of a composite system cannot be expected to be the sum of the entropies of the constituent systems, because the entropy is a property of the composite system as a whole. Therefore, though under these restrictive reservations, entropy satisfies some requirements for extensivity defined just above, entropy in general does not fit the immediately present definition of an extensive variable. Being neither an intensive variable nor an extensive variable according to the immediately present definition, entropy is thus a stand-out variable, because it is a state variable of a system as a whole.[102] A non-equilibrium system can have a very inhomogeneous dynamical structure. This is one reason for distinguishing the study of equilibrium thermodynamics from the study of non-equilibrium thermodynamics. The physical reason for the existence of extensive variables is the time-invariance of volume in a given inertial reference frame, and the strictly local conservation of mass, momentum, angular momentum, and energy. As noted by Gibbs, entropy is unlike energy and mass, because it is not locally conserved.[102] The stand-out quantity entropy is never conserved in real physical processes; all real physical processes are irreversible.[104] The motion of planets seems reversible on a short time scale (millions of years), but their motion, according to Newton’s laws, is mathematically an example of deterministic chaos.
Eventually a planet suffers an unpredictable collision with an object from its surroundings, outer space in this case, and consequently its future course is radically unpredictable. Theoretically this can be expressed by saying that every natural process dissipates some information from the predictable part of its activity into the unpredictable part. The predictable part is expressed in the generalized mechanical variables, and the unpredictable part in heat. Other state variables can be regarded as conditionally 'extensive' subject to reservation as above, but not extensive as defined above. Examples are the Gibbs free energy, the Helmholtz free energy, and the enthalpy. Consequently, just because for some systems under particular conditions of their surroundings such state variables are conditionally conjugate to intensive variables, such conjugacy does not make such state variables extensive as defined above. This is another reason for distinguishing the study of equilibrium thermodynamics from the study of non-equilibrium thermodynamics. In another way of thinking, this explains why heat is to be regarded as a quantity that refers to a process and not to a state of a system.
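The scaling definitions above can be made concrete with a small sketch of our own, using the ideal monatomic gas as the working material: doubling a homogeneous system doubles the extensive variables (amount, volume, internal energy) while leaving the intensive ones (temperature, pressure) unchanged.

```python
# Thermodynamic operation of scaling, illustrated with an ideal monatomic gas
# (our own sketch): n, V, U are extensive; T and P are intensive.
R = 8.314  # J/(mol K), molar gas constant

def state(n, V, T):
    """State variables of n moles of ideal monatomic gas in volume V at temperature T."""
    return {
        "n": n, "V": V, "T": T,   # n, V extensive; T intensive
        "P": n * R * T / V,       # intensive (ideal gas law)
        "U": 1.5 * n * R * T,     # extensive (monatomic internal energy)
    }

s = state(1.0, 0.0224, 273.15)
s_scaled = state(2.0, 0.0448, 273.15)  # the same system, scaled by a factor of 2

print(s_scaled["U"] / s["U"])   # 2.0  (extensive: scales with the system)
print(s_scaled["P"] == s["P"])  # True (intensive: unchanged by scaling)
```

As the text stresses, entropy would not fit this check in general: the additivity of entropies holds only under the restrictive conditions described above, not for arbitrary side-by-side compositions.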

A system with no internal partitions, and in thermodynamic equilibrium, can be inhomogeneous in the following respect: it can consist of several so-called 'phases', each homogeneous in itself, in immediate contiguity with other phases of the system, but distinguishable by their having various respectively different physical characters, with discontinuity of intensive variables at the boundaries between the phases; a mixture of different chemical species is considered homogeneous for this purpose if it is physically homogeneous.[105] For example, a vessel can contain a system consisting of water vapour overlying liquid water; then there is a vapour phase and a liquid phase, each homogeneous in itself, but still in thermodynamic equilibrium with the other phase. For the immediately present account, systems with multiple phases are not considered, though for many thermodynamic questions multiphase systems are important.

Equation of state

The macroscopic variables of a thermodynamic system in thermodynamic equilibrium, in which temperature is well defined, can be related to one another through equations of state or characteristic equations.[29][30][31][32] They express the constitutive peculiarities of the material of the system. The equation of state must comply with some thermodynamic constraints, but cannot be derived from the general principles of thermodynamics alone.

Thermodynamic processes between states of thermodynamic equilibrium

A thermodynamic process is defined by changes of state internal to the system of interest, combined with transfers of matter and energy to and from the surroundings of the system or to and from other systems. A system is demarcated from its surroundings or from other systems by partitions that more or less separate them, and may move as a piston to change the volume of the system and thus transfer work.

Dependent and independent variables for a process

A process is described by changes in values of state variables of systems or by quantities of exchange of matter and energy between systems and surroundings. The change must be specified in terms of prescribed variables. The choice of which variables are to be used is made in advance of consideration of the course of the process, and cannot be changed. Certain of the variables chosen in advance are called the independent variables.[106] From changes in independent variables may be derived changes in other variables, called dependent variables. For example, a process may occur at constant pressure with pressure prescribed as an independent variable, and temperature changed as another independent variable, and then changes in volume are considered as dependent. Careful attention to this principle is necessary in thermodynamics.[107][108]

Changes of state of a system

In the approach through equilibrium states of the system, a process can be described in two main ways.

In one way, the system is considered to be connected to the surroundings by some kind of more or less separating partition, and allowed to reach equilibrium with the surroundings with that partition in place. Then, while the separative character of the partition is kept unchanged, the conditions of the surroundings are changed, and exert their influence on the system again through the separating partition, or the partition is moved so as to change the volume of the system; and a new equilibrium is reached. For example, a system is allowed to reach equilibrium with a heat bath at one temperature; then the temperature of the heat bath is changed and the system is allowed to reach a new equilibrium; if the partition allows conduction of heat, the new equilibrium is different from the old equilibrium.

In the other way, several systems are connected to one another by various kinds of more or less separating partitions, and allowed to reach equilibrium with each other, with those partitions in place. In this way, one may speak of a 'compound system'. Then one or more partitions is removed or changed in its separative properties or moved, and a new equilibrium is reached. The Joule-Thomson experiment is an example of this; a tube of gas is separated from another tube by a porous partition; the volume available in each of the tubes is determined by respective pistons; equilibrium is established with an initial set of volumes; the volumes are changed and a new equilibrium is established.[109][110][111][112][113] Another example is in separation and mixing of gases, with use of chemically semi-permeable membranes.[114]

Commonly considered thermodynamic processes

It is often convenient to study a thermodynamic process in which a single variable, such as temperature, pressure, or volume, etc., is held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair.

Several commonly studied thermodynamic processes are:

• Isobaric process: occurs at constant pressure

• Isochoric process: occurs at constant volume (also called isometric/isovolumetric)

• Isothermal process: occurs at a constant temperature


• Adiabatic process: occurs without loss or gain of en- A cyclic process of a system requires in its surroundings ergy as heat at least two heat reservoirs at different temperatures, one at a higher temperature that supplies heat to the system, • Isentropic process: a reversible adiabatic process oc- the other at a lower temperature that accepts heat from curs at a constant entropy, but is a fictional idealization. the system. The early work on thermodynamics tended to Conceptually it is possible to actually physically con- use the cyclic process approach, because it was interested duct a process that keeps the entropy of the system in machines that converted some of the heat from the surconstant, allowing systematically controlled removal roundings into mechanical power delivered to the surroundof heat, by conduction to a cooler body, to compensate ings, without too much concern about the internal workfor entropy produced within the system by irreversible ings of the machine. Such a machine, while receiving an work done on the system. Such isentropic conduct of a amount of heat from a higher temperature reservoir, alprocess seems called for when the entropy of the sys- ways needs a lower temperature reservoir that accepts some tem is considered as an independent variable, as for lesser amount of heat. The difference in amounts of heat example when the internal energy is considered as a is equal to the amount of heat converted to work.[83] Later, function of the entropy and volume of the system, the the internal workings of a system became of interest, and natural variables of the internal energy as studied by they are described by the states of the system. Nowadays, Gibbs. instead of arguing in terms of cyclic processes, some writers are inclined to derive the concept of absolute temperature • Isenthalpic process: occurs at a constant enthalpy from the concept of entropy, a variable of state. 
• Isolated process: no matter or energy (neither as work nor as heat) is transferred into or out of the system It is sometimes of interest to study a process in which several variables are controlled, subject to some specified constraint. In a system in which a chemical reaction can occur, for example, in which the pressure and temperature can affect the equilibrium composition, a process might occur in which temperature is held constant but pressure is slowly altered, just so that chemical equilibrium is maintained all the way. There is a corresponding process at constant temperature in which the final pressure is the same but is reached by a rapid jump. Then it can be shown that the volume change resulting from the rapid jump process is smaller than that from the slow equilibrium process.[115] The work transferred differs between the two processes. Account in terms of cyclic processes A cyclic process[26] is a process that can be repeated indefinitely often without changing the final state of the system in which the process occurs. The only traces of the effects of a cyclic process are to be found in the surroundings of the system or in other systems. This is the kind of process that concerned early thermodynamicists such as Sadi Carnot, and in terms of which Kelvin defined absolute temperature,[116] before the use of the quantity of entropy by Rankine and its clear identification by Clausius.[117] For some systems, for example with some plastic working substances, cyclic processes are practically nearly unfeasible because the working substance undergoes practically irreversible changes.[118] This is why mechanical devices are lubricated with oil and one of the reasons why electrical devices are often useful.
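The heat bookkeeping of such a cyclic machine can be sketched numerically. The reservoir temperatures and heat amounts below are illustrative assumptions, not values from the text; the sketch encodes the statement that the work delivered per cycle equals the heat received minus the heat rejected, together with the standard Carnot bound on efficiency (a standard result, not derived in the text above):

```python
# Sketch of the heat bookkeeping for a cyclic heat engine.
# All numbers are illustrative assumptions, not data from the text.

def engine_work(q_hot: float, q_cold: float) -> float:
    """Work delivered per cycle: heat taken in minus heat rejected."""
    return q_hot - q_cold

def carnot_efficiency(t_hot: float, t_cold: float) -> float:
    """Upper bound on efficiency for reservoirs at absolute temperatures t_hot > t_cold."""
    return 1.0 - t_cold / t_hot

q_hot = 1000.0   # J absorbed from the hotter reservoir (assumed)
q_cold = 600.0   # J rejected to the colder reservoir (assumed)

work = engine_work(q_hot, q_cold)          # 400.0 J converted to work
eta = work / q_hot                         # realized efficiency, 0.4
eta_max = carnot_efficiency(500.0, 300.0)  # Carnot bound for assumed 500 K / 300 K reservoirs

print(f"work = {work} J, efficiency = {eta:.2f}, Carnot bound = {eta_max:.2f}")
```

Any realizable engine between those reservoirs must satisfy `eta <= eta_max`; the numbers here happen to sit exactly at the bound only by construction.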

1.1.9 Instrumentation

There are two types of thermodynamic instruments: the meter and the reservoir. A thermodynamic meter is any device that measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law PV = nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer, may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device that measures and defines the internal energy of a system.

A thermodynamic reservoir is a system so large that it does not appreciably alter its state parameters when brought into contact with the test system. It is used to impose a particular value of a state parameter upon the system. For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon any test system that it is mechanically connected to. The Earth’s atmosphere is often used as a pressure reservoir.
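The idealized constant-pressure gas thermometer described above can be put as a short calculation: with the amount of gas and the pressure fixed, PV = nRT makes the measured volume a linear indicator of temperature. The numerical values below are illustrative assumptions:

```python
# Ideal-gas thermometer sketch: at fixed pressure P and amount n,
# PV = nRT makes the measured volume V a linear indicator of temperature T.
R = 8.314  # J/(mol*K), molar gas constant

def temperature_from_volume(volume_m3: float, pressure_pa: float, n_mol: float) -> float:
    """Invert PV = nRT for T (kelvin)."""
    return pressure_pa * volume_m3 / (n_mol * R)

# Illustrative assumption: 1 mol of ideal gas at atmospheric pressure.
p = 101325.0  # Pa
n = 1.0       # mol
v = 0.0224    # m^3, roughly the molar volume near 0 degrees C

print(f"T = {temperature_from_volume(v, p, n):.1f} K")  # ~273 K
```

Doubling the observed volume at fixed pressure and amount reads as a doubled absolute temperature, which is exactly the sense in which the instrument defines temperature.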


1.1.10 Conjugate variables

Main article: Conjugate variables

A central concept of thermodynamics is that of energy. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.

Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a “force” applied to some thermodynamic system, the second being akin to the resulting “displacement,” and the product of the two equalling the amount of energy transferred. The common conjugate variables are:

• Pressure–volume (the mechanical parameters);
• Temperature–entropy (thermal parameters);
• Chemical potential–particle number (material parameters).

1.1.11 Potentials

Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively.

The five most well known potentials are:

Internal energy: U
Helmholtz free energy: F = U − TS
Enthalpy: H = U + pV
Gibbs free energy: G = U + pV − TS
Landau potential (grand potential): Ω = U − TS − Σi µi Ni

where T is the temperature, S the entropy, p the pressure, V the volume, µ the chemical potential, N the number of particles in the system, and i is the count of particle types in the system.

Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.

1.1.12 Axiomatics

Most accounts of thermodynamics presuppose the law of conservation of mass, sometimes with,[119] and sometimes without,[120][121] explicit mention. Particular attention is paid to the law in accounts of non-equilibrium thermodynamics.[122][123] One statement of this law is “The total mass of a closed system remains constant.”[9] Another statement of it is “In a chemical reaction, matter is neither created nor destroyed.”[124] Implied in this is that matter and energy are not considered to be interconverted in such accounts. The full generality of the law of conservation of energy is thus not used in such accounts.

In 1909, Constantin Carathéodory presented[56] a purely mathematical axiomatic formulation, a description often referred to as geometrical thermodynamics, and sometimes said to take the “mechanical approach”[79] to thermodynamics. The Carathéodory formulation is restricted to equilibrium thermodynamics and does not attempt to deal with non-equilibrium thermodynamics, forces that act at a distance on the system, or surface tension effects.[125] Moreover, Carathéodory’s formulation does not deal with materials like water near 4 °C, which have a density extremum as a function of temperature at constant pressure.[126][127] Carathéodory used the law of conservation of energy as an axiom from which, along with the contents of the zeroth law, and some other assumptions including his own version of the second law, he derived the first law of thermodynamics.[128] Consequently, one might also describe Carathéodory’s work as lying in the field of energetics,[129] which is broader than thermodynamics. Carathéodory presupposed the law of conservation of mass without explicit mention of it.

Since the time of Carathéodory, other influential axiomatic formulations of thermodynamics have appeared, which, like Carathéodory’s, use their own respective axioms, different from the usual statements of the four laws, to derive the four usually stated laws.[130][131][132]

Many axiomatic developments assume the existence of states of thermodynamic equilibrium and of states of thermal equilibrium. States of thermodynamic equilibrium of compound systems allow their component simple systems to exchange heat and matter and to do work on each other on their way to overall joint equilibrium. Thermal equilibrium allows them only to exchange heat. The physical properties of glass depend on its history of being heated and cooled and, strictly speaking, glass is not in thermodynamic equilibrium.[133]

According to Herbert Callen's widely cited 1985 text on thermodynamics: “An essential prerequisite for the measurability of energy is the existence of walls that do not permit transfer of energy in the form of heat.”[134] According to Werner Heisenberg's mature and careful examination of the basic concepts of physics, the theory of heat has a self-standing place.[135]

From the viewpoint of the axiomatist, there are several different ways of thinking about heat, temperature, and the second law of thermodynamics. The Clausius way rests on the empirical fact that heat is conducted always down, never up, a temperature gradient. The Kelvin way is to assert the empirical fact that conversion of heat into work by cyclic processes is never perfectly efficient. A more mathematical way is to assert the existence of a function of state called the entropy that tells whether a hypothesized process occurs spontaneously in nature. A more abstract way is that of Carathéodory, which in effect asserts the irreversibility of some adiabatic processes. For these different ways, there are respective corresponding different ways of viewing heat and temperature.

The Clausius–Kelvin–Planck way

This way prefers ideas close to the empirical origins of thermodynamics. It presupposes transfer of energy as heat, and empirical temperature as a scalar function of state. According to Gislason and Craig (2005): “Most thermodynamic data come from calorimetry...”[136] According to Kondepudi (2008): “Calorimetry is widely used in present day laboratories.”[137] In this approach, what is often currently called the zeroth law of thermodynamics is deduced as a simple consequence of the presupposition of the nature of heat and empirical temperature, but it is not named as a numbered law of thermodynamics. Planck attributed this point of view to Clausius, Kelvin, and Maxwell. Planck wrote (on page 90 of the seventh edition, dated 1922, of his treatise) that he thought that no proof of the second law of thermodynamics could ever work that was not based on the impossibility of a perpetual motion machine of the second kind. In that treatise, Planck makes no mention of the 1909 Carathéodory way, which was well known by 1922.
Planck for himself chose a version of what is just above called the Kelvin way.[138] The development by Truesdell and Bharatha (1977) is so constructed that it can deal naturally with cases like that of water near 4 °C.[131]

The way that assumes the existence of entropy as a function of state

This way also presupposes transfer of energy as heat, and it presupposes the usually stated form of the zeroth law of thermodynamics, and from these two it deduces the existence of empirical temperature. Then from the existence of entropy it deduces the existence of absolute thermodynamic temperature.[8][130]

The Carathéodory way

This way presupposes that the state of a simple one-phase system is fully specifiable by just one more state variable than the known exhaustive list of mechanical variables of state. It does not explicitly name empirical temperature, but speaks of the one-dimensional “non-deformation coordinate”. This satisfies the definition of an empirical temperature, that lies on a one-dimensional manifold. The Carathéodory way needs to assume moreover that the one-dimensional manifold has a definite sense, which determines the direction of irreversible adiabatic process, which is effectively assuming that heat is conducted from hot to cold. This way presupposes the often currently stated version of the zeroth law, but does not actually name it as one of its axioms.[125] According to one author, Carathéodory’s principle, which is his version of the second law of thermodynamics, does not imply the increase of entropy when work is done under adiabatic conditions (as was noted by Planck[139]). Thus Carathéodory’s way leaves unstated a further empirical fact that is needed for a full expression of the second law of thermodynamics.[140]
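The Helmholtz and Gibbs energies, and the Legendre transformations mentioned in the Potentials subsection, can be made concrete with a short numerical sketch. The state values below are arbitrary illustrative assumptions, chosen only so that the arithmetic is easy to follow:

```python
# The common thermodynamic potentials read as Legendre transforms of the
# internal energy U: each swaps an extensive variable (S or V) for its
# conjugate intensive one (T or p). All state values below are arbitrary
# illustrative assumptions in consistent SI units.

def helmholtz(u: float, t: float, s: float) -> float:
    """F = U - TS: energy available for work at fixed temperature and volume."""
    return u - t * s

def enthalpy(u: float, p: float, v: float) -> float:
    """H = U + pV."""
    return u + p * v

def gibbs(u: float, t: float, s: float, p: float, v: float) -> float:
    """G = U + pV - TS: energy available for non-pV work at fixed T and p."""
    return u + p * v - t * s

u, t, s = 5000.0, 300.0, 10.0      # J, K, J/K (assumed)
p, v = 101325.0, 0.01              # Pa, m^3 (assumed)

print(helmholtz(u, t, s))          # 2000.0
print(enthalpy(u, p, v))           # ~6013.25
print(gibbs(u, t, s, p, v))        # ~3013.25
```

Note that G = H − TS = F + pV holds identically, which is a quick consistency check on any such table of potentials.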

1.1.13 Scope of thermodynamics

Originally thermodynamics concerned material and radiative phenomena that are experimentally reproducible. For example, a state of thermodynamic equilibrium is a steady state reached after a system has aged so that it no longer changes with the passage of time. But more than that, for thermodynamics, a system, defined by its being prepared in a certain way, must, consequent on every particular occasion of preparation, upon aging, reach one and the same eventual state of thermodynamic equilibrium, entirely determined by the way of preparation. Such reproducibility is because the systems consist of so many molecules that the molecular variations between particular occasions of preparation have negligible or scarcely discernible effects on the macroscopic variables that are used in thermodynamic descriptions. This led to Boltzmann’s discovery that entropy had a statistical or probabilistic nature. Probabilistic and statistical explanations arise from the experimental reproducibility of the phenomena.[141]

Gradually, the laws of thermodynamics came to be used to explain phenomena that occur outside the experimental laboratory. For example, phenomena on the scale of the earth’s atmosphere cannot be reproduced in a laboratory experiment. But processes in the atmosphere can be modeled by use of thermodynamic ideas, extended well beyond the scope of laboratory equilibrium thermodynamics.[142][143][144]

A parcel of air can, near enough for many studies, be considered as a closed thermodynamic system, one that is allowed to move over significant distances. The pressure exerted by the surrounding air on the lower face of a parcel of air may differ from that on its upper face. If this results in rising of the parcel of air, it can be considered to have gained potential energy as a result of work being done on it by the combined surrounding air below and above it. As it rises, such a parcel usually expands because the pressure is lower at the higher altitudes that it reaches.
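The near-adiabatic cooling of such an expanding, rising parcel can be sketched with the standard dry-adiabatic relation T2 = T1 · (p2/p1)^(R/cp) for an ideal gas. The pressures and starting temperature below are illustrative assumptions, not values from the text:

```python
# Dry-adiabatic temperature change of a rising air parcel:
# for a reversible adiabatic process in an ideal gas,
# T2 = T1 * (p2 / p1) ** (R / cp).
R_DRY = 287.0    # J/(kg*K), specific gas constant of dry air
CP_DRY = 1004.0  # J/(kg*K), specific heat of dry air at constant pressure

def adiabatic_temperature(t1_kelvin: float, p1_pa: float, p2_pa: float) -> float:
    """Parcel temperature after an adiabatic pressure change from p1 to p2."""
    return t1_kelvin * (p2_pa / p1_pa) ** (R_DRY / CP_DRY)

# Illustrative assumption: a parcel lifted from 1000 hPa to 850 hPa.
t_surface = 290.0  # K
t_aloft = adiabatic_temperature(t_surface, 100000.0, 85000.0)

print(f"parcel cools from {t_surface} K to {t_aloft:.1f} K")  # roughly 277 K
```

As the text notes, this simple law holds only while there is no condensation, evaporation, or sublimation in the parcel, and while heat exchange by conduction and radiation stays negligible.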
In that way, the rising parcel also does work on the surrounding atmosphere. For many studies, such a parcel can be considered nearly to neither gain nor lose energy by heat conduction to its surrounding atmosphere, and its rise is rapid enough to leave negligible time for it to gain or lose heat by radiation; consequently the rising of the parcel is near enough adiabatic. Thus the adiabatic gas law accounts for its internal state variables, provided that there is no precipitation into water droplets, no evaporation of water droplets, and no sublimation in the process. More precisely, the rising of the parcel is likely to occasion friction and turbulence, so that some potential and some kinetic energy of bulk converts into internal energy of air considered as effectively stationary. Friction and turbulence thus oppose the rising of the parcel.[145][146]

1.1.14 Applied fields

• Atmospheric thermodynamics
• Biological thermodynamics
• Black hole thermodynamics
• Chemical thermodynamics
• Equilibrium thermodynamics
• Geology
• Industrial ecology (re: Exergy)
• Maximum entropy thermodynamics
• Non-equilibrium thermodynamics
• Philosophy of thermal and statistical physics
• Psychrometrics
• Quantum thermodynamics
• Statistical thermodynamics
• Thermoeconomics

1.1.15 See also

• Entropy production

Lists and timelines

• List of important publications in thermodynamics
• List of textbooks in statistical mechanics
• List of thermal conductivities
• List of thermodynamic properties
• Table of thermodynamic equations
• Timeline of thermodynamics

Wikibooks

• Engineering Thermodynamics
• Entropy for Beginners

1.1.16 References

[1] Clausius, Rudolf (1850). On the Motive Power of Heat, and on the Laws which can be deduced from it for the Theory of Heat. Poggendorff’s Annalen der Physik, LXXIX (Dover Reprint). ISBN 0-486-59065-8.

[2] Thomson, W. (1854). “On the Dynamical Theory of Heat. Part V. Thermo-electric Currents”. Transactions of the Royal Society of Edinburgh 21 (part I): 123. doi:10.1017/s0080456800032014. Reprinted in Sir William Thomson, LL.D. D.C.L., F.R.S. (1882). Mathematical and Physical Papers 1. London, Cambridge: C.J. Clay, M.A. & Son, Cambridge University Press. p. 232. “Hence Thermodynamics falls naturally into two divisions, of which the subjects are respectively, the relation of heat to the forces acting between contiguous parts of bodies, and the relation of heat to electrical agency.”

[3] Hess, H. (1840). Thermochemische Untersuchungen, Annalen der Physik und Chemie (Poggendorff, Leipzig) 126(6): 385–404.

[4] Gibbs, Willard, J. (1876). Transactions of the Connecticut Academy, III, pp. 108–248, Oct. 1875 – May 1876, and pp. 343–524, May 1877 – July 1878.

[5] Duhem, P.M.M. (1886). Le Potential Thermodynamique et ses Applications, Hermann, Paris.

[6] Lewis, Gilbert N.; Randall, Merle (1923). Thermodynamics and the Free Energy of Chemical Substances. McGraw-Hill Book Co. Inc.

[7] Guggenheim, E.A. (1933). Modern Thermodynamics by the Methods of J.W. Gibbs, Methuen, London.

[8] Guggenheim, E.A. (1949/1967)

[9] Prigogine, I. & Defay, R., translated by D.H. Everett (1954). Chemical Thermodynamics. Longmans, Green & Co., London. Includes classical non-equilibrium thermodynamics.

[10] Enrico Fermi (1956). Thermodynamics. Courier Dover Publications. p. ix. ISBN 0-486-60361-X. OCLC 230763036.

[11] Perrot, Pierre (1998). A to Z of Thermodynamics. Oxford University Press. ISBN 0-19-856552-6. OCLC 123283342.

[12] Bridgman, P.W. (1943). The Nature of Thermodynamics, Harvard University Press, Cambridge MA, p. 48.

[13] Partington, J.R. (1949), page 118.


[14] Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York, page 122.

[39] Gibbs, Willard (1993). The Scientific Papers of J. Willard Gibbs, Volume One: Thermodynamics. Ox Bow Press. ISBN 0-918024-77-3. OCLC 27974820.

[15] Fowler, R., Guggenheim, E.A. (1939), p. 3.

[40] Oxford English Dictionary, Oxford University Press, Oxford UK.

[16] Tisza, L. (1966), p. 18. [17] Marsland, R. III, Brown, H.R., Valente, G. (2015). [18] Adkins, C.J. (1968/1983), p. 4. [19] Born, M. (1949), p. 44. [20] Guggenheim, E.A. (1949/1967), pp. 7–8. [21] Tisza, L. (1966), pp. 109, 112. [22] Callen, p. 15. [23] Bailyn, M. (1994), p. 21. [24] Callen, H.B. (1960/1985), p. 427. [25] Tisza, L. (1966), pp. 41, 109, 121, originally published as 'The thermodynamics of phase equilibrium', Annals of Physics, 13: 1–92. [26] Serrin, J. (1986). Chapter 1, 'An Outline of Thermodynamical Structure', pp. 3–32, especially p. 8, in Serrin, J. (1986). [27] Fowler, R., Guggenheim, E.A. (1939), p. 13.

[41] Donald T. Haynie (2001). Biological Thermodynamics (2 ed.). Cambridge University Press. p. 22. [42] Thomson, W. (1849). “An Account of Carnot’s Theory of the Motive Power of Heat; with Numerical Results deduced from Regnault’s Experiments on Steam”. Transactions of the Royal Society of Edinburgh 16 (part V): 541–574. doi:10.1017/s0080456800022481. [43] Rankine, William (1859). “3: Principles of Thermodynamics”. A Manual of the Steam Engine and other Prime Movers. London: Charles Griffin and Co. pp. 299–448. [44] Pippard, A.B. (1957), p. 70. [45] Partington, J.R. (1949), p. 615–621. [46] Serrin, J. (1986). An outline of thermodynamical structure, Chapter 1, pp. 3–32 in Serrin, J. (1986). [47] Callen, H.B. (1960/1985), Chapter 6, pages 131–152. [48] Callen, H.B. (1960/1985), p. 13.

[28] Tisza, L. (1966), pp. 79–80.

[49] Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, ISBN 019-851142-6, p. 1.

[29] Planck, M. 1923/1926, page 5.

[50] Eu, B.C. (2002).

[30] Partington, p. 121.

[51] Lebon, G., Jou, D., Casas-Vázquez, J. (2008).

[31] Adkins, pp. 19–20.

[52] Grandy, W.T., Jr (2008), passim and p. 123.

[32] Haase, R. (1971), pages 11–16.

[53] Callen, H.B. (1985), p. 26.

[33] Balescu, R. (1975). Equilibrium and Nonequilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0-47104600-0.

[54] Gibbs J.W. (1875), pp. 115–116.

[34] Schrödinger, E. (1946/1967). Statistical Thermodynamics. A Course of Seminar Lectures, Cambridge University Press, Cambridge UK.

[56] C. Carathéodory (1909). “Untersuchungen über die Grundlagen der Thermodynamik”. Mathematische Annalen 67: 355–386. A partly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA. doi:10.1007/BF01450409.

[35] Partington, J.R. (1949), p. 551. [36] Partington, J.R. (1989). A Short History of Chemistry. Dover. OCLC 19353301.

[55] Bryan, G.H. (1907), p. 5.

[57] Haase, R. (1971), p. 13.

[37] The Newcomen engine was improved from 1711 until Watt’s work, making the efficiency comparison subject to qualification, but the increase from the Newcomen 1765 version was on the order of 100%.

[58] Bailyn, M. (1994), p. 145.

[38] Cengel, Yunus A.; Boles, Michael A. (2005). Thermodynamics – an Engineering Approach. McGraw-Hill. ISBN 0-07-310768-9.

[61] Partington, J.R. (1949), p. 129.

[59] Bailyn, M. (1994), Section 6.11. [60] Planck, M. (1897/1903), passim.

[62] Callen, H.B. (1960/1985), Section 4–2.


[63] Guggenheim, E.A. (1949/1967), §1.12.

[64] de Groot, S.R., Mazur, P., Non-equilibrium thermodynamics, 1969, North-Holland Publishing Company, Amsterdam–London.

[65] Moran, Michael J. and Howard N. Shapiro, 2008. Fundamentals of Engineering Thermodynamics. 6th ed. Wiley and Sons: 16.

[66] Planck, M. (1897/1903), p. 1.

[67] Rankine, W.J.M. (1853). Proc. Roy. Soc. (Edin.), 20(4).

[68] Maxwell, J.C. (1872), page 32.

[69] Maxwell, J.C. (1872), page 57.

[70] Planck, M. (1897/1903), pp. 1–2.

[71] Clausius, R. (1850). Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen, Annalen der Physik und Chemie, 155 (3): 368–394.


[86] Goody, R.M., Yung, Y.L. (1989). Atmospheric Radiation. Theoretical Basis, second edition, Oxford University Press, Oxford UK, ISBN 0-19-505134-3, p. 5 [87] Wallace, J.M., Hobbs, P.V. (2006). Atmospheric Science. An Introductory Survey, second edition, Elsevier, Amsterdam, ISBN 978-0-12-732951-2, p. 292. [88] Partington, J.R. (1913). A Text-book of Thermodynamics, Van Nostrand, New York, page 37. [89] Glansdorff, P., Prigogine, I., (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, WileyInterscience, London, ISBN 0-471-30280-5, page 15. [90] Haase, R., (1971), page 16. [91] Eu, B.C. (2002), p. 13. [92] Adkins, C.J. (1968/1975), pp. 46–49. [93] Adkins, C.J. (1968/1975), p. 172. [94] Lebon, G., Jou, D., Casas-Vázquez, J. (2008), pp. 37–38.

[72] Rankine, W.J.M. (1850). On the mechanical action of heat, especially in gases and vapours. Trans. Roy. Soc. Edinburgh, 20: 147–190.

[95] Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London, pp. 117– 118.

[73] Helmholtz, H. von. (1897/1903). Vorlesungen über Theorie der Wärme, edited by F. Richarz, Press of Johann Ambrosius Barth, Leipzig, Section 46, pp. 176–182, in German.

[96] Guggenheim, E.A. (1949/1967), p. 6.

[74] Planck, M. (1897/1903), p. 43. [75] Guggenheim, E.A. (1949/1967), p. 10. [76] Sommerfeld, A. (1952/1956), Section 4 A, pp. 13–16. [77] Ilya Prigogine, I. & Defay, R., translated by D.H. Everett (1954). Chemical Thermodynamics. Longmans, Green & Co., London, p. 21.

[97] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0471-04600-0, Section 3.2, pp. 64–72. [98] Ilya Prigogine, I. & Defay, R., translated by D.H. Everett (1954). Chemical Thermodynamics. Longmans, Green & Co., London. pp. 1–6. [99] Lavenda, B.H. (1978). Thermodynamics of Irreversible Processes, Macmillan, London, ISBN 0-333-21616-4, p. 12.

[78] Lewis, G.N., Randall, M. (1961). Thermodynamics, second edition revised by K.S. Pitzer and L. Brewer, McGraw-Hill, New York, p. 35.

[79] Bailyn, M. (1994), page 79.

[80] Khanna, F.C., Malbouisson, A.P.C., Malbouisson, J.M.C., Santana, A.E. (2009). Thermal Quantum Field Theory. Algebraic Aspects and Applications, World Scientific, Singapore, ISBN 978-981-281-887-4, p. 6.

[81] Helmholtz, H. von, (1847). Ueber die Erhaltung der Kraft, G. Reimer, Berlin.

[82] Joule, J.P. (1847). On matter, living force, and heat, Manchester Courier, 5 and 12 May 1847.

[83] Truesdell, C.A. (1980).

[100] Guggenheim, E.A. (1949/1967), p. 19.

[101] Guggenheim, E.A. (1949/1967), pp. 18–19.

[102] Grandy, W.T., Jr (2008), Chapter 5, pp. 59–68.

[103] Kondepudi & Prigogine (1998), pp. 116–118.

[104] Guggenheim, E.A. (1949/1967), Section 1.12, pp. 12–13.

[105] Planck, M. (1897/1903), p. 65.

[106] Planck, M. (1923/1926), Section 152A, pp. 121–123.

[107] Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co., London, p. 1.

[108] Adkins, pp. 43–46.

[109] Planck, M. (1897/1903), Section 70, pp. 48–50.

[84] Partington, J.R. (1949), page 150.

[110] Guggenheim, E.A. (1949/1967), Section 3.11, pp. 92–92.

[85] Kondepudi & Prigogine (1998), pp. 31–32.

[111] Sommerfeld, A. (1952/1956), Section 1.5 C, pp. 23–25.


[112] Callen, H.B. (1960/1985), Section 6.3.

[113] Adkins, pp. 164–168.

[114] Planck, M. (1897/1903), Section 236, pp. 211–212.

[115] Prigogine, I. & Defay, R., translated by D.H. Everett (1954). Chemical Thermodynamics. Longmans, Green & Co., London, Chapters 18–19.

[116] Truesdell, C.A. (1980), Section 11B, pp. 306–310.

[117] Truesdell, C.A. (1980), Sections 8G, 8H, 9A, pp. 207–224.

[118] Ziegler, H. (1983). An Introduction to Thermomechanics, North-Holland, Amsterdam, ISBN 0-444-86503-9.

[119] Ziegler, H. (1977). An Introduction to Thermomechanics, North-Holland, Amsterdam, ISBN 0-7204-0432-0.

[120] Planck, M. (1922/1927).

[121] Guggenheim, E.A. (1949/1967).

[122] de Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North Holland, Amsterdam.

[123] Gyarmati, I. (1970). Non-equilibrium Thermodynamics, translated into English by E. Gyarmati and W.F. Heinz, Springer, New York.

[124] Tro, N.J. (2008). Chemistry. A Molecular Approach, Pearson Prentice-Hall, Upper Saddle River NJ, ISBN 0-13-100065-9.

[125] Turner, L.A. (1962). Simplification of Carathéodory’s treatment of thermodynamics, Am. J. Phys. 30: 781–786.

[126] Turner, L.A. (1962). Further remarks on the zeroth law, Am. J. Phys. 30: 804–806.

[127] Thomsen, J.S., Hartka, T.J., (1962). Strange Carnot cycles; thermodynamics of a system with a density maximum, Am. J. Phys. 30: 26–33, 30: 388–389.

[128] C. Carathéodory (1909). “Untersuchungen über die Grundlagen der Thermodynamik”. Mathematische Annalen 67: 363. doi:10.1007/bf01450409. Axiom II: “In jeder beliebigen Umgebung eines willkürlich vorgeschriebenen Anfangszustandes gibt es Zustände, die durch adiabatische Zustandsänderungen nicht beliebig approximiert werden können.” [In every arbitrary neighbourhood of an arbitrarily prescribed initial state there exist states that cannot be arbitrarily closely approximated by adiabatic changes of state.]

[129] Duhem, P. (1911). Traité d'Energetique, Gautier-Villars, Paris.

[130] Callen, H.B. (1960/1985).

[131] Truesdell, C., Bharatha, S. (1977). The Concepts and Logic of Classical Thermodynamics as a Theory of Heat Engines, Rigorously Constructed upon the Foundation Laid by S. Carnot and F. Reech, Springer, New York, ISBN 0-387-07971-8.

[132] Wright, P.G. (1980). Conceptually distinct types of thermodynamics, Eur. J. Phys. 1: 81–84.

[133] Callen, H.B. (1960/1985), p. 14.

[134] Callen, H.B. (1960/1985), p. 16.

[135] Heisenberg, W. (1958). Physics and Philosophy, Harper & Row, New York, pp. 98–99.

[136] Gislason, E.A., Craig, N.C. (2005). Cementing the foundations of thermodynamics: comparison of system-based and surroundings-based definitions of work and heat, J. Chem. Thermodynamics 37: 954–966.

[137] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, p. 63.

[138] Planck, M. (1922/1927).

[139] Planck, M. (1926). Über die Begründung des zweiten Hauptsatzes der Thermodynamik, Sitzungsberichte der Preußischen Akademie der Wissenschaften, physikalisch-mathematischen Klasse, pp. 453–463.

[140] Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6, p. 41.

[141] Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems, Oxford University Press, Oxford UK, ISBN 978-0-19-954617-6, p. 49.

[142] Iribarne, J.V., Godson, W.L. (1973/1989). Atmospheric thermodynamics, second edition, reprinted 1989, Kluwer Academic Publishers, Dordrecht, ISBN 90-277-1296-4.

[143] Peixoto, J.P., Oort, A.H. (1992). Physics of climate, American Institute of Physics, New York, ISBN 0-88318-712-4.

[144] North, G.R., Erukhimova, T.L. (2009). Atmospheric Thermodynamics. Elementary Physics and Chemistry, Cambridge University Press, Cambridge UK, ISBN 978-0-521-89963-5.

[145] Holton, J.R. (2004). An Introduction to Dynamic Meteorology, fourth edition, Elsevier, Amsterdam, ISBN 978-0-12-354015-7.

[146] Mak, M. (2011). Atmospheric Dynamics, Cambridge University Press, Cambridge UK, ISBN 978-0-521-19573-7.

1.1.17

Cited bibliography

• Adkins, C.J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, ISBN 0-07-084057-1. • Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.

1.1. CLASSICAL THERMODYNAMICS • Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London. • Bryan, G.H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B.G. Teubner, Leipzig. • Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-862568. • Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.

19 • Planck, M.(1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London. • Planck, M. (1923/1926). Treatise on Thermodynamics, third English edition translated by A. Ogg from the seventh German edition, Longmans, Green & Co., London. • Serrin, J. (1986). New Perspectives in Thermodynamics, edited by J. Serrin, Springer, Berlin, ISBN 3-54015931-2. • Sommerfeld, A. (1952/1956). Thermodynamics and Statistical Mechanics, Academic Press, New York.

• Fowler, R., Guggenheim, E.A. (1939). Statistical Thermodynamics, Cambridge University Press, Cambridge UK.

• Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.

• Gibbs, J.W. (1875). On the equilibrium of heterogeneous substances, Transactions of the Connecticut Academy of Arts and Sciences, 3: 108–248.

• Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA.

• Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems, Oxford University Press, Oxford, ISBN 978-0-19-954617-6.

• Truesdell, C.A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, ISBN 0-387-90403-4.

• Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, 1.1.18 Further reading (1st edition 1949) 5th edition 1967, North-Holland, Amsterdam. • Goldstein, Martin, and Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN • Haase, R. (1971). Survey of Fundamental Laws, 0-674-75325-9. OCLC 32826343. A nontechnical chapter 1 of Thermodynamics, pages 1–97 of volume introduction, good on historical and interpretive mat1, ed. W. Jost, of Physical Chemistry. An Advanced ters. Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081. • Kazakov, Andrei (July–August 2008). “Web Thermo Tables – an On-Line Version of the TRC Thermody• Kondepudi, D., Prigogine, I. (1998). Modern Thermonamic Tables” (PDF). Journal of Research of the Nadynamics. From Heat Engines to Dissipative Structures, tional Institutes of Standards and Technology 113 (4): John Wiley & Sons, ISBN 0-471-97393-9. 209–220. doi:10.6028/jres.113.016. • Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, The following titles are more technical: Berlin, ISBN 978-3-540-74251-7. • Marsland, R. III, Brown, H.R., Valente, G. (2015). Time and irreversibility in axiomatic thermodynamics, Am. J. Phys., 83(7): 628–634. • Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London. • Pippard, A.B. (1957). The Elements of Classical Thermodynamics, Cambridge University Press.

• Cengel, Yunus A., & Boles, Michael A. (2002). Thermodynamics – an Engineering Approach. McGraw Hill. ISBN 0-07-238332-1. OCLC 45791449.

• Fermi, E. (1956). Thermodynamics, Dover, New York.

• Kittel, Charles & Kroemer, Herbert (1980). Thermal Physics. W. H. Freeman Company. ISBN 0-7167-1088-9. OCLC 32932988.

1.1.19 External links

• Thermodynamics Data & Property Calculation Websites

• Thermodynamics OpenCourseWare from the University of Notre Dame. Archived March 4, 2011, at the Wayback Machine.

• Thermodynamics at ScienceWorld

• Biochemistry Thermodynamics

• Engineering Thermodynamics – A Graphical Approach

1.2 Statistical Thermodynamics

Statistical mechanics is a branch of theoretical physics that uses probability theory to study the average behaviour of a mechanical system made up of a large number of equivalent components whose microscopic realization is uncertain or undefined.[1][2][3][note 1]

A common use of statistical mechanics is in explaining the thermodynamic behaviour of large systems. The branch of statistical mechanics that treats and extends classical thermodynamics is known as statistical thermodynamics or equilibrium statistical mechanics.

Microscopic mechanical laws do not contain concepts such as temperature, heat, or entropy; statistical mechanics shows how these concepts arise from the natural uncertainty about the state of a system when that system is prepared in practice. The benefit of using statistical mechanics is that it provides exact methods to connect thermodynamic quantities (such as heat capacity) to microscopic behaviour, whereas in classical thermodynamics the only available option would be to measure and tabulate such quantities for various materials. Statistical mechanics also makes it possible to extend the laws of thermodynamics to cases which are not considered in classical thermodynamics, such as microscopic systems and other mechanical systems with few degrees of freedom.[1]

Statistical mechanics also finds use outside equilibrium. An important subbranch known as non-equilibrium statistical mechanics deals with microscopically modelling the speed of irreversible processes that are driven by imbalances. Examples of such processes include chemical reactions and flows of particles and heat. Unlike the equilibrium case, there is no exact formalism that applies to non-equilibrium statistical mechanics in general, and so this branch remains an active area of theoretical research.

1.2.1 Principles: mechanics and ensembles

Main articles: Mechanics and Statistical ensemble

In physics there are two types of mechanics usually examined: classical mechanics and quantum mechanics. For both types of mechanics, the standard mathematical approach is to consider two ingredients:

1. The complete state of the mechanical system at a given time, mathematically encoded as a phase point (classical mechanics) or a pure quantum state vector (quantum mechanics).

2. An equation of motion which carries the state forward in time: Hamilton’s equations (classical mechanics) or the time-dependent Schrödinger equation (quantum mechanics).

Using these two ingredients, the state at any other time, past or future, can in principle be calculated. There is, however, a disconnect between these laws and everyday experience: we do not find it necessary (nor even theoretically possible) to know, at a microscopic level, the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction).

Statistical mechanics bridges this gap between the laws of mechanics and the practical experience of incomplete knowledge by adding some uncertainty about which state the system is in. Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble, a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinates. In quantum statistical mechanics, the ensemble is a probability distribution over pure states,[note 2] and can be compactly summarized as a density matrix.

As is usual for probabilities, the ensemble can be interpreted in different ways:[1]

• an ensemble can be taken to represent the various possible states that a single system could be in (epistemic probability, a form of knowledge), or

• the members of the ensemble can be understood as the states of the systems in experiments repeated on independent systems which have been prepared in a


similar but imperfectly controlled manner (empirical probability), in the limit of an infinite number of trials.

These two meanings are equivalent for many purposes, and will be used interchangeably in this article.

However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state.

One special class of ensemble is those ensembles that do not evolve over time. These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium. Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state.[note 3] The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems.

1.2.2 Statistical thermodynamics

The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium and the microscopic behaviours and motions occurring inside the material. Whereas statistical mechanics proper involves dynamics, here the attention is focussed on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving (mechanical equilibrium); rather, only that the ensemble is not evolving.

Fundamental postulate

A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.).[1] There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics.[1] Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another.

A common approach found in many textbooks is to take the equal a priori probability postulate.[2] This postulate states that:

    For an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge.

The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate:

• Ergodic hypothesis: An ergodic state is one that evolves over time to explore “all accessible” states: all those with the same energy and composition. In an ergodic system, the microcanonical ensemble is the only possible equilibrium ensemble with fixed energy. This approach has limited applicability, since most systems are not ergodic.

• Principle of indifference: In the absence of any further information, we can only assign equal probabilities to each compatible situation.

• Maximum information entropy: A more elaborate version of the principle of indifference states that the correct ensemble is the ensemble that is compatible with the known information and that has the largest Gibbs entropy (information entropy).[4]

Other fundamental postulates for statistical mechanics have also been proposed.[5]

Three thermodynamic ensembles

Main articles: Microcanonical ensemble, Canonical ensemble and Grand canonical ensemble

There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume.[1] These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics.

Microcanonical ensemble describes a system with a precisely given energy and fixed composition (precise


number of particles). The microcanonical ensemble contains with equal probability each possible state that is consistent with that energy and composition.

Canonical ensemble describes a system of fixed composition that is in thermal equilibrium[note 4] with a heat bath of a precise temperature. The canonical ensemble contains states of varying energy but identical composition; the different states in the ensemble are accorded different probabilities depending on their total energy.

Grand canonical ensemble describes a system with non-fixed composition (uncertain particle numbers) that is in thermal and chemical equilibrium with a thermodynamic reservoir. The reservoir has a precise temperature, and precise chemical potentials for various types of particle. The grand canonical ensemble contains states of varying energy and varying numbers of particles; the different states in the ensemble are accorded different probabilities depending on their total energy and total particle numbers.

For systems containing many particles (the thermodynamic limit), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used.[6] Important cases where the thermodynamic ensembles do not give identical results include:

• Microscopic systems.
• Large systems at a phase transition.
• Large systems with long-range interactions.

In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system.[2]

Calculation methods

Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities.

Exact

There are some cases which allow exact solutions.

• For very small microscopic systems, the ensembles can be directly computed by simply enumerating over all possible states of the system (using exact diagonalization in quantum mechanics, or integral over all phase space in classical mechanics).

• Some large systems consist of many separable microscopic systems, and each of the subsystems can be analysed independently. Notably, idealized gases of non-interacting particles have this property, allowing exact derivations of Maxwell–Boltzmann statistics, Fermi–Dirac statistics, and Bose–Einstein statistics.[2]

• A few large systems with interaction have been solved. By the use of subtle mathematical techniques, exact solutions have been found for a few toy models.[7] Some examples include the Bethe ansatz, the square-lattice Ising model in zero field, and the hard hexagon model.

Monte Carlo

Main article: Monte Carlo method

One approximate approach that is particularly well suited to computers is the Monte Carlo method, which examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level.

• The Metropolis–Hastings algorithm is a classic Monte Carlo method which was initially used to sample the canonical ensemble.

• Path integral Monte Carlo, also used to sample the canonical ensemble.

Other

• For rarefied non-ideal gases, approaches such as the cluster expansion use perturbation theory to include the effect of weak interactions, leading to a virial expansion.[3]


• For dense fluids, another approximate approach is based on reduced distribution functions, in particular the radial distribution function.[3]

• Molecular dynamics computer simulations can be used to calculate microcanonical ensemble averages, in ergodic systems. With the inclusion of a connection to a stochastic heat bath, they can also model canonical and grand canonical conditions.

• Mixed methods involving non-equilibrium statistical mechanical results (see below) may be useful.

1.2.3 Non-equilibrium statistical mechanics

See also: Non-equilibrium thermodynamics

There are many physical phenomena of interest that involve quasi-thermodynamic processes out of equilibrium, for example:

• heat transport by the internal motions in a material, driven by a temperature imbalance,
• electric currents carried by the motion of charges in a conductor, driven by a voltage imbalance,
• spontaneous chemical reactions driven by a decrease in free energy,
• friction, dissipation, quantum decoherence,
• systems being pumped by external forces (optical pumping, etc.),
• and irreversible processes in general.

All of these processes occur over time with characteristic rates, and these rates are of importance for engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.)

In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville’s equation or its quantum equivalent, the von Neumann equation. These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. Unfortunately, these ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble’s Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to add additional ingredients besides probability and reversible mechanics.

Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections.

Stochastic methods

One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes, a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier.

• Boltzmann transport equation: An early form of stochastic mechanics appeared even before the term “statistical mechanics” had been coined, in studies of kinetic theory. James Clerk Maxwell had demonstrated that molecular collisions would lead to apparently chaotic motion inside a gas. Ludwig Boltzmann subsequently showed that, by taking this molecular chaos for granted as a complete randomization, the motions of particles in a gas would follow a simple Boltzmann transport equation that would rapidly restore a gas to an equilibrium state (see H-theorem). The Boltzmann transport equation and related approaches are important tools in non-equilibrium statistical mechanics due to their extreme simplicity. These approximations work well in systems where the “interesting” information is immediately (after just one collision) scrambled up into subtle correlations, which essentially restricts them to rarefied gases. The Boltzmann transport equation has been found to be very useful in simulations of electron transport in lightly doped semiconductors (in transistors), where the electrons are indeed analogous to a rarefied gas. A quantum technique related in theme is the random phase approximation.

• BBGKY hierarchy: In liquids and dense gases, it is not valid to immediately discard the correlations between particles after one collision. The BBGKY hierarchy (Bogoliubov–Born–Green–Kirkwood–Yvon hierarchy) gives a method for deriving Boltzmann-type equations but also extending them beyond the dilute gas case, to include correlations after a few collisions.

• Keldysh formalism (a.k.a. NEGF—non-equilibrium Green functions): A quantum approach to including stochastic dynamics is found in the Keldysh formalism. This approach is often used in electronic quantum transport calculations.

Near-equilibrium methods

Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory. A remarkable result, as formalized by the fluctuation-dissipation theorem, is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or “know” how it came to be away from equilibrium.[3]:664

This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation-dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics.

A few of the theoretical tools used to make this connection include:

• Fluctuation–dissipation theorem
• Onsager reciprocal relations
• Green–Kubo relations
• Landauer–Büttiker formalism
• Mori–Zwanzig formalism

Hybrid methods

An advanced approach uses a combination of stochastic methods and linear response theory. As an example, one approach to compute quantum coherence effects (weak localization, conductance fluctuations) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method.[8][9]

1.2.4 Applications outside thermodynamics

The ensemble formalism can also be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in:

• propagation of uncertainty over time,[1]
• regression analysis of gravitational orbits,
• ensemble forecasting of weather,
• dynamics of neural networks,
• bounded-rational potential games in game theory and economics.

1.2.5 History

In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica, which laid the basis for the kinetic theory of gases. In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion.[5]

In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius, Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. This was the first-ever statistical law in physics.[10] Five years later, in 1864, Ludwig Boltzmann, a young student in Vienna, came across Maxwell’s paper and spent much of his life developing the subject further.

Statistical mechanics proper was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory.[11] Boltzmann’s original papers on the statistical interpretation of thermodynamics, the H-theorem, transport theory, thermal equilibrium, the equation of state of gases, and similar subjects occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H-theorem.

The term “statistical mechanics” was coined by the American mathematical physicist J. Willard Gibbs in 1884.[12][note 5] “Probabilistic mechanics” might today seem a more appropriate term, but “statistical mechanics” is firmly entrenched.[13] Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics, a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous.[1] Gibbs’ methods were initially derived in the framework of classical mechanics; however, they were of such generality that they were found to adapt easily to the later quantum mechanics, and they still form the foundation of statistical mechanics to this day.[2]
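The exact-enumeration and Monte Carlo routes described in the calculation-methods discussion above can be made concrete with a minimal sketch. This is not from the original text: the two-level model, parameter values, and function names are invented for illustration. A canonical ensemble of N independent two-level units is solved exactly via the single-unit partition function, and then sampled with the Metropolis acceptance rule.

```python
import math
import random

# Hypothetical toy model: N independent two-level units with level
# energies 0 and eps, in contact with a heat bath at temperature kT.

def exact_mean_energy(n_units, eps, kT):
    # Exact route: the single-unit Boltzmann distribution gives the
    # probability of the excited level; units are independent.
    p_excited = math.exp(-eps / kT) / (1.0 + math.exp(-eps / kT))
    return n_units * eps * p_excited

def metropolis_mean_energy(n_units, eps, kT, n_steps, seed=0):
    # Monte Carlo route: Metropolis-Hastings sampling of the same
    # canonical ensemble, averaging the total energy over the chain.
    rng = random.Random(seed)
    state = [0] * n_units          # 0 = ground level, 1 = excited level
    energy = 0.0
    total = 0.0
    for _ in range(n_steps):
        i = rng.randrange(n_units)             # propose flipping one unit
        d_e = eps if state[i] == 0 else -eps   # energy change of the flip
        # Accept with probability min(1, exp(-d_e/kT))
        if d_e <= 0 or rng.random() < math.exp(-d_e / kT):
            state[i] ^= 1
            energy += d_e
        total += energy
    return total / n_steps

if __name__ == "__main__":
    exact = exact_mean_energy(100, 1.0, 1.0)
    mc = metropolis_mean_energy(100, 1.0, 1.0, 200_000)
    print(exact, mc)
```

Because the units are non-interacting, this is one of the exactly solvable cases; the Metropolis estimate should agree with the exact mean energy to within sampling error, which is the basic consistency check used for real Monte Carlo codes.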

1.2.6 See also

• Thermodynamics: non-equilibrium, chemical
• Mechanics: classical, quantum
• Probability, statistical ensemble
• Numerical methods: Monte Carlo method, molecular dynamics
• Statistical physics
• Quantum statistical mechanics
• List of notable textbooks in statistical mechanics
• List of important publications in statistical mechanics

Fundamentals of Statistical Mechanics – Wikipedia book

1.2.7 Notes

[1] The term statistical mechanics is sometimes used to refer to only statistical thermodynamics. This article takes the broader view. By some definitions, statistical physics is an even broader term which statistically studies any type of physical system, but is often taken to be synonymous with statistical mechanics.

[2] The probabilities in quantum statistical mechanics should not be confused with quantum superposition. While a quantum ensemble can contain states with quantum superpositions, a single quantum state cannot be used to represent an ensemble.

[3] Statistical equilibrium should not be confused with mechanical equilibrium. The latter occurs when a mechanical system has completely ceased to evolve even on a microscopic scale, due to being in a state with a perfect balancing of forces. Statistical equilibrium generally involves states that are very far from mechanical equilibrium.

[4] The transitive thermal equilibrium (as in, “X is thermal equilibrium with Y”) used here means that the ensemble for the first system is not perturbed when the system is allowed to weakly interact with the second system.

[5] According to Gibbs, the term “statistical”, in the context of mechanics, i.e. statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871. From: J. Clerk Maxwell, Theory of Heat (London, England: Longmans, Green, and Co., 1871), p. 309: “In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus.”

1.2.8 References

[1] Gibbs, Josiah Willard (1902). Elementary Principles in Statistical Mechanics. New York: Charles Scribner’s Sons.

[2] Tolman, R. C. (1938). The Principles of Statistical Mechanics. Dover Publications. ISBN 9780486638966.

[3] Balescu, Radu (1975). Equilibrium and Non-Equilibrium Statistical Mechanics. John Wiley & Sons. ISBN 9780471046004.

[4] Jaynes, E. (1957). “Information Theory and Statistical Mechanics”. Physical Review 106 (4): 620. doi:10.1103/PhysRev.106.620.

[5] J. Uffink, “Compendium of the foundations of classical statistical physics.” (2006)

[6] Reif, F. (1965). Fundamentals of Statistical and Thermal Physics. McGraw–Hill. p. 227. ISBN 9780070518001.

[7] Baxter, Rodney J. (1982). Exactly solved models in statistical mechanics. Academic Press Inc. ISBN 9780120831807.

[8] Altshuler, B. L.; Aronov, A. G.; Khmelnitsky, D. E. (1982). “Effects of electron-electron collisions with small energy transfers on quantum localisation”. Journal of Physics C: Solid State Physics 15 (36): 7367. doi:10.1088/0022-3719/15/36/018.

[9] Aleiner, I.; Blanter, Y. (2002). “Inelastic scattering time for conductance fluctuations”. Physical Review B 65 (11). doi:10.1103/PhysRevB.65.115317.

[10] Mahon, Basil (2003). The Man Who Changed Everything – the Life of James Clerk Maxwell. Hoboken, NJ: Wiley. ISBN 0-470-86171-1. OCLC 52358254.

[11] Ebeling, Werner; Sokolov, Igor M. (2005). Statistical Thermodynamics and Stochastic Theory of Nonequilibrium Systems. World Scientific Publishing Co. Pte. Ltd. pp. 3–12. ISBN 978-90-277-1674-3. (section 1.2)

[12] J. W. Gibbs, “On the Fundamental Formula of Statistical Mechanics, with Applications to Astronomy and Thermodynamics.” Proceedings of the American Association for the Advancement of Science, 33, 57–58 (1884). Reproduced in The Scientific Papers of J. Willard Gibbs, Vol II (1906), pp. 16.

[13] Mayants, Lazar (1984). The enigma of probability and physics. Springer. p. 174. ISBN 978-90-277-1674-3.

1.2.9 External links

• Philosophy of Statistical Mechanics article by Lawrence Sklar for the Stanford Encyclopedia of Philosophy.
• Sklogwiki - Thermodynamics, statistical mechanics, and the computer simulation of materials. SklogWiki is particularly orientated towards liquids and soft condensed matter.
• Statistical Thermodynamics - Historical Timeline
• Thermodynamics and Statistical Mechanics by Richard Fitzpatrick
• Lecture Notes in Statistical Mechanics and Mesoscopics by Doron Cohen
• Videos of lecture series in statistical mechanics on YouTube taught by Leonard Susskind.
• Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. This wiki site is down; see this article in the web archive on 2012 April 28.

1.3 Chemical Thermodynamics

Chemical thermodynamics is the study of the interrelation of heat and work with chemical reactions or with physical changes of state within the confines of the laws of thermodynamics. Chemical thermodynamics involves not only laboratory measurements of various thermodynamic properties, but also the application of mathematical methods to the study of chemical questions and the spontaneity of processes.

The structure of chemical thermodynamics is based on the first two laws of thermodynamics. Starting from the first and second laws of thermodynamics, four equations called the “fundamental equations of Gibbs” can be derived. From these four, a multitude of equations relating the thermodynamic properties of the thermodynamic system can be derived using relatively simple mathematics. This outlines the mathematical framework of chemical thermodynamics.[1]

1.3.1 History

J. Willard Gibbs - founder of chemical thermodynamics

In 1865, the German physicist Rudolf Clausius, in his Mechanical Theory of Heat, suggested that the principles of thermochemistry, e.g. the heat evolved in combustion reactions, could be applied to the principles of thermodynamics.[2] Building on the work of Clausius, between the years 1873–76 the American mathematical physicist Willard Gibbs published a series of three papers, the most famous one being the paper On the Equilibrium of Heterogeneous Substances. In these papers, Gibbs showed how the first two laws of thermodynamics could be measured graphically and mathematically to determine both the thermodynamic equilibrium of chemical reactions as well as their tendencies to occur or proceed. Gibbs’ collection of papers provided the first unified body of thermodynamic theorems from the principles developed by others, such as Clausius and Sadi Carnot.

During the early 20th century, two major publications successfully applied the principles developed by Gibbs to chemical processes, and thus established the foundation of the science of chemical thermodynamics. The first was the 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall. This book was responsible for supplanting the term chemical affinity with the term free energy in the English-speaking world. The second was the 1933 book Modern Thermodynamics by the methods of Willard Gibbs written

by E. A. Guggenheim. In this manner, Lewis, Randall, and Guggenheim are considered the founders of modern chemical thermodynamics because of the major contribution of these two books in unifying the application of thermodynamics to chemistry.[1]

1.3.2 Overview

The primary objective of chemical thermodynamics is the establishment of a criterion for the determination of the feasibility or spontaneity of a given transformation.[3] In this manner, chemical thermodynamics is typically used to predict the energy exchanges that occur in the following processes: 1. Chemical reactions 2. Phase changes 3. The formation of solutions

The following state functions are of primary concern in chemical thermodynamics:

• Internal energy (U)
• Enthalpy (H)
• Entropy (S)
• Gibbs free energy (G)

Most identities in chemical thermodynamics arise from the application of the first and second laws of thermodynamics, particularly the law of conservation of energy, to these state functions.

The three laws of thermodynamics:

1. The energy of the universe is constant.
2. In any spontaneous process, there is always an increase in the entropy of the universe.
3. The entropy of a perfect (well-ordered) crystal at 0 kelvin is zero.

1.3.3 Chemical energy

Main article: Chemical energy

Chemical energy is the potential of a chemical substance to undergo a transformation through a chemical reaction or to transform other chemical substances. Breaking or making of chemical bonds involves energy or heat, which may be either absorbed or evolved from a chemical system. The energy that can be released (or absorbed) because of a reaction between a set of chemical substances is equal to the difference between the energy content of the products and the reactants. This change in energy is called the change in internal energy of a chemical reaction: ΔU = ΔUf°(products) − ΔUf°(reactants), where ΔUf°(reactants) is the internal energy of formation of the reactant molecules, which can be calculated from the bond energies of the various chemical bonds of the molecules under consideration, and ΔUf°(products) is the internal energy of formation of the product molecules. The change in internal energy is equal to the heat change if it is measured under conditions of constant volume (at STP conditions), as in a closed rigid container such as a bomb calorimeter. However, under conditions of constant pressure, as in reactions in vessels open to the atmosphere, the measured heat change is not always equal to the internal energy change, because pressure-volume work also releases or absorbs energy. (The heat change at constant pressure is called the enthalpy change; in this case, the enthalpy of formation.)

Another useful term is the heat of combustion, which is the energy released due to a combustion reaction and often applied in the study of fuels. Food is similar to hydrocarbon and carbohydrate fuels, and when it is oxidized, its caloric content is similar (though not assessed in the same way as a hydrocarbon fuel; see food energy).

In chemical thermodynamics the term used for the chemical potential energy is chemical potential, and for chemical transformation the equation most often used is the Gibbs-Duhem equation.

1.3.4 Chemical reactions

Main article: Chemical reaction

In most cases of interest in chemical thermodynamics there are internal degrees of freedom and processes, such as chemical reactions and phase transitions, which always create entropy unless they are at equilibrium or are maintained at a "running equilibrium" through "quasi-static" changes by being coupled to constraining devices, such as pistons or electrodes, to deliver and receive external work. Even for homogeneous "bulk" materials, the free energy functions depend on the composition, as do all the extensive thermodynamic potentials, including the internal energy. If the quantities { Ni }, the numbers of chemical species, are omitted from the formulae, it is impossible to describe compositional changes.
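The constant-volume versus constant-pressure distinction drawn in the calorimetry discussion above can be made quantitative: for ideal-gas species, ΔH = ΔU + Δn_gas·R·T. A minimal sketch; the calorimeter reading below is a hypothetical number, not data from the text:

```python
# Sketch: converting a constant-volume heat of reaction (dU, as read from a
# bomb calorimeter) to the constant-pressure heat (dH), assuming the gaseous
# species behave ideally:  dH = dU + dn_gas * R * T.
# Illustrative reaction: 2 H2(g) + O2(g) -> 2 H2O(l), so dn_gas = -3.

R = 8.314    # J/(mol K), gas constant

def enthalpy_from_internal_energy(dU_kJ, dn_gas, T=298.15):
    """Return dH in kJ given dU in kJ and the change in moles of gas."""
    return dU_kJ + dn_gas * R * T / 1000.0

dU = -563.5  # kJ, a hypothetical calorimeter result, not a measured datum
dH = enthalpy_from_internal_energy(dU, dn_gas=-3)
print(f"dH = {dH:.1f} kJ")   # → dH = -570.9 kJ (PV work makes dH more negative)
```

When the number of gas moles decreases, the atmosphere does work on the system, so the heat evolved at constant pressure exceeds that at constant volume, as the sign of the correction shows.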


Gibbs function or Gibbs Energy

For a "bulk" (unstructured) system they are the last remaining extensive variables. For an unstructured, homogeneous "bulk" system, there are still various extensive compositional variables { Ni } that G depends on, which specify the composition: the amounts of each chemical substance, expressed as the numbers of molecules present or (dividing by Avogadro's number, 6.022 × 10^23) the numbers of moles:

G = G(T, P, {Ni}) .

For the case where only PV work is possible,

dG = −S dT + V dP + Σi μi dNi

in which μi is the chemical potential for the i-th component in the system,

μi = (∂G/∂Ni)T,P,Nj≠i .

The expression for dG is especially useful at constant T and P, conditions which are easy to achieve experimentally and which approximate the conditions in living creatures:

(dG)T,P = Σi μi dNi .

Chemical affinity

Main article: Chemical affinity

While this formulation is mathematically defensible, it is not particularly transparent, since one does not simply add or remove molecules from a system. There is always a process involved in changing the composition; e.g., a chemical reaction (or many), or movement of molecules from one phase (liquid) to another (gas or solid). We should find a notation which does not seem to imply that the amounts of the components { Ni } can be changed independently. All real processes obey conservation of mass and, in addition, conservation of the numbers of atoms of each kind. Whatever molecules are transferred to or from should be considered part of the "system".

Consequently, we introduce an explicit variable to represent the degree of advancement of a process: a progress variable ξ for the extent of reaction (Prigogine & Defay, p. 18; Prigogine, pp. 4–7; Guggenheim, pp. 37, 62), together with the partial derivative ∂G/∂ξ (in place of the widely used "ΔG", since the quantity at issue is not a finite change). The result is an understandable expression for the dependence of dG on chemical reactions (or other processes). If there is just one reaction,

(dG)T,P = (∂G/∂ξ)T,P dξ .

If we introduce the stoichiometric coefficient for the i-th component in the reaction,

νi = ∂Ni/∂ξ ,

which tells how many molecules of i are produced or consumed, we obtain an algebraic expression for the partial derivative,

(∂G/∂ξ)T,P = Σi μi νi = −A ,

where (De Donder; Prigogine & Defay, p. 69; Guggenheim, pp. 37, 240) we introduce a concise and historical name for this quantity, the "affinity", symbolized by A, as introduced by Théophile de Donder in 1923. The minus sign comes from the fact that the affinity was defined to represent the rule that spontaneous changes will ensue only when the change in the Gibbs free energy of the process is negative, meaning that the chemical species have a positive affinity for each other. The differential for G takes on a simple form which displays its dependence on compositional change:

(dG)T,P = −A dξ .

If there are a number of chemical reactions going on simultaneously, as is usually the case,

(dG)T,P = −Σk Ak dξk ,

with a set of reaction coordinates { ξk }, avoiding the notion that the amounts of the components { Ni } can be changed independently. The expressions above are equal to zero at thermodynamic equilibrium, while in the general case for real systems they are negative, because all chemical reactions proceeding at a finite rate produce entropy. This can be made even more explicit by introducing the reaction rates dξk/dt. For each and every physically independent process (Prigogine & Defay, p. 38; Prigogine, p. 24),
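The affinity defined above is just a stoichiometry-weighted sum of chemical potentials, which makes it easy to sketch in code. The chemical potentials below are invented illustrative numbers, not data for any real reaction:

```python
# Sketch of the derivative above: (dG/dxi)_{T,P} = sum_i nu_i * mu_i = -A.
# Stoichiometric convention: nu < 0 for reactants, nu > 0 for products.
# The chemical potentials mu are invented illustrative values (J/mol),
# not data for any real reaction.

nu = {"A": -1, "B": -2, "C": 1}   # hypothetical reaction A + 2 B -> C
mu = {"A": -50_000.0, "B": -30_000.0, "C": -150_000.0}

dG_dxi = sum(nu[s] * mu[s] for s in nu)   # (dG/dxi)_{T,P}
affinity = -dG_dxi                        # De Donder's affinity A

print(f"dG/dxi = {dG_dxi:.0f} J/mol, A = {affinity:.0f} J/mol")
# A > 0 (equivalently dG/dxi < 0): the reaction tends to run forward as written.
```

With these made-up potentials the slope of G along the reaction coordinate is negative, so advancing the reaction lowers G, exactly the spontaneity criterion the text describes.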


A ξ̇ ≥ 0 .

This is a remarkable result, since the chemical potentials are intensive system variables, depending only on the local molecular milieu. They cannot "know" whether the temperature and pressure (or any other system variables) are going to be held constant over time. It is a purely local criterion and must hold regardless of any such constraints. Of course, it could have been obtained by taking partial derivatives of any of the other fundamental state functions, but nonetheless it is a general criterion for (−T times) the entropy production from that spontaneous process, or at least any part of it that is not captured as external work. (See Constraints below.) We now relax the requirement of a homogeneous "bulk" system by letting the chemical potentials and the affinity apply to any locality in which a chemical reaction (or any other process) is occurring. By accounting for the entropy production due to irreversible processes, the inequality for dG is now replaced by an equality

dG = −S dT + V dP − Σk Ak dξk + W′

or

(dG)T,P = −Σk Ak dξk + W′ .

Any decrease in the Gibbs function of a system is the upper limit for any isothermal, isobaric work that can be captured in the surroundings; or the energy may simply be dissipated, appearing as T times a corresponding increase in the entropy of the system and/or its surroundings; or it may go partly toward doing external work and partly toward creating entropy. The important point is that the extent of reaction for a chemical reaction may be coupled to the displacement of some external mechanical or electrical quantity in such a way that one can advance only if the other one also does. The coupling may occasionally be rigid, but it is often flexible and variable.
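The sign structure of these relations can be illustrated with a toy relaxation model, assuming (this is an assumption, not something stated in the text) a linear rate law ξ̇ = L·A with L > 0: the Gibbs function then decreases monotonically while the entropy-production term A ξ̇ stays non-negative.

```python
# Toy sketch (an assumed model, not from the text): one chemical reaction
# relaxing to equilibrium under a linear rate law xi_dot = L * A with L > 0.
# Near equilibrium, model the Gibbs function as G(xi) = 0.5*k*(xi - xi_eq)**2,
# so the affinity is A = -dG/dxi = -k*(xi - xi_eq).

k, L, xi_eq = 2.0, 0.5, 1.0   # hypothetical stiffness, kinetic coefficient, equilibrium extent
xi, dt = 0.0, 0.01

def G(x):
    return 0.5 * k * (x - xi_eq) ** 2

prev_G = G(xi)
for _ in range(2000):
    A = -k * (xi - xi_eq)          # affinity, positive while xi < xi_eq
    xi_dot = L * A                 # assumed linear kinetics
    assert A * xi_dot >= 0.0       # entropy-production term never negative
    xi += xi_dot * dt              # explicit Euler step
    assert G(xi) <= prev_G         # the Gibbs function never increases
    prev_G = G(xi)

print(f"xi -> {xi:.4f} (equilibrium at {xi_eq})")
```

The assertions inside the loop are the point of the sketch: along the whole relaxation path, G only decreases and A ξ̇ is never negative.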

Solutions

In solution chemistry and biochemistry, the Gibbs free energy decrease (∂G/∂ξ, in molar units, denoted cryptically by ΔG) is commonly used as a surrogate for (−T times) the entropy produced by spontaneous chemical reactions in situations where no work is being done, or at least no "useful" work, i.e. other than perhaps some ± P dV. The assertion that all spontaneous reactions have a negative ΔG is merely a restatement of the fundamental thermodynamic relation, giving it the physical dimensions of energy and somewhat obscuring its significance in terms of entropy. When there is no useful work being done, it would be less misleading to use the Legendre transforms of the entropy appropriate for constant T, or for constant T and P: the Massieu functions −F/T and −G/T respectively.

1.3.5 Non equilibrium

Main article: Non-equilibrium thermodynamics

Generally the systems treated with conventional chemical thermodynamics are either at equilibrium or near equilibrium. Ilya Prigogine developed the thermodynamic treatment of open systems that are far from equilibrium. In doing so he discovered phenomena and structures of completely new and completely unexpected types. His generalized, nonlinear and irreversible thermodynamics has found surprising applications in a wide variety of fields. Non-equilibrium thermodynamics has been applied to explain how ordered structures, e.g. biological systems, can develop from disorder. Even if Onsager's relations are utilized, the classical principles of equilibrium in thermodynamics still show that linear systems close to equilibrium always develop into states of disorder which are stable to perturbations, and cannot explain the occurrence of ordered structures. Prigogine called these systems dissipative systems, because they are formed and maintained by the dissipative processes which take place because of the exchange of energy between the system and its environment, and because they disappear if that exchange ceases. They may be said to live in symbiosis with their environment. The method which Prigogine used to study the stability of dissipative structures to perturbations is of very great general interest. It makes it possible to study the most varied problems, such as city traffic, the stability of insect communities, the development of ordered biological structures, and the growth of cancer cells, to mention but a few examples.

System constraints

In this regard, it is crucial to understand the role of walls and other constraints, and the distinction between independent processes and coupling. Contrary to the clear implications of many reference sources, the previous analysis is not restricted to homogeneous, isotropic bulk systems which can deliver only P dV work to the outside world; it applies even to the most structured systems. There are complex systems with many chemical "reactions" going on at the same time, some of which are really only parts of the same overall process. An independent process is one that could proceed even if all others were unaccountably stopped in their tracks. Understanding this is perhaps a "thought experiment" in chemical kinetics, but actual examples exist.
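For electrochemical coupling, the standard relation ΔG = −nFE quantifies how much of the reaction free energy can be captured as electrical work. A sketch with illustrative textbook-style numbers (a Daniell-type cell, E ≈ 1.10 V, n = 2; these are assumptions, not measurements from the text):

```python
# Sketch: maximum (reversible) electrical work from a cell reaction via
#   dG = -n * F * E_cell,
# so w_max = n * F * E_cell. The cell voltage and electron count below are
# illustrative textbook-style numbers (a Daniell-type cell), not measurements.

F = 96485.0  # C/mol, Faraday constant

def max_electrical_work_kJ(n, E_cell):
    """Reversible work per mole of reaction, in kJ: w_max = n*F*E = -dG."""
    return n * F * E_cell / 1000.0

w = max_electrical_work_kJ(n=2, E_cell=1.10)
print(f"w_max = {w:.1f} kJ/mol, dG = {-w:.1f} kJ/mol")
# → w_max = 212.3 kJ/mol, dG = -212.3 kJ/mol
```

In a real cell the coupling is not rigid, so the delivered work is below this reversible bound, with the shortfall appearing as entropy production.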

A gas reaction which results in an increase in the number of molecules will lead to an increase in volume at constant external pressure. If it occurs inside a cylinder closed with a piston, the equilibrated reaction can proceed only by doing work against an external force on the piston. The extent variable for the reaction can increase only if the piston moves; conversely, if the piston is pushed inward, the reaction is driven backwards.

Similarly, a redox reaction might occur in an electrochemical cell with the passage of current in wires connecting the electrodes. The half-cell reactions at the electrodes are constrained if no current is allowed to flow. The current might be dissipated as joule heating, or it might in turn run an electrical device like a motor doing mechanical work. An automobile lead-acid battery can be recharged, driving the chemical reaction backwards. In this case as well, the reaction is not an independent process. Some, perhaps most, of the Gibbs free energy of reaction may be delivered as external work.

The hydrolysis of ATP to ADP and phosphate can drive the force-times-distance work delivered by living muscles, and synthesis of ATP is in turn driven by a redox chain in mitochondria and chloroplasts, which involves the transport of ions across the membranes of these cellular organelles. The coupling of processes here, and in the previous examples, is often not complete. Gas can leak slowly past a piston, just as it can slowly leak out of a rubber balloon. Some reaction may occur in a battery even if no external current is flowing. There is usually a coupling coefficient, which may depend on relative rates, and which determines what percentage of the driving free energy is turned into external work, or captured as "chemical work", a misnomer for the free energy of another chemical process.

1.3.6 See also

• Thermodynamic databases for pure substances

1.3.7 References

[1] Ott, Bevan J.; Boerio-Goates, Juliana (2000). Chemical Thermodynamics – Principles and Applications. Academic Press. ISBN 0-12-530990-2.

[2] Clausius, R. (1865). The Mechanical Theory of Heat – with its Applications to the Steam Engine and to Physical Properties of Bodies. London: John van Voorst, 1 Paternoster Row. MDCCCLXVII.

[3] Klotz, I. (1950). Chemical Thermodynamics. New York: Prentice-Hall, Inc.

1.3.8 Further reading

• Herbert B. Callen (1960). Thermodynamics. Wiley & Sons. The clearest account of the logical foundations of the subject. ISBN 0-471-13035-4. Library of Congress Catalog No. 60-5597.

• Ilya Prigogine & R. Defay, translated by D. H. Everett; Chapter IV (1954). Chemical Thermodynamics. Longmans, Green & Co. Exceptionally clear on the logical foundations as applied to chemistry; includes non-equilibrium thermodynamics.

• Ilya Prigogine (1967). Thermodynamics of Irreversible Processes, 3rd ed. Interscience: John Wiley & Sons. A simple, concise monograph explaining all the basic ideas. Library of Congress Catalog No. 67-29540.

• E. A. Guggenheim (1967). Thermodynamics: An Advanced Treatment for Chemists and Physicists, 5th ed. North Holland; John Wiley & Sons (Interscience). A remarkably astute treatise. Library of Congress Catalog No. 67-20003.

• Th. De Donder (1922). Bull. Ac. Roy. Belg. (Cl. Sc.) (5) 7: 197, 205.

1.3.9 External links

• Chemical Thermodynamics - University of North Carolina

• Chemical energetics (Introduction to thermodynamics and the First Law)

• Thermodynamics of chemical equilibrium (Entropy, Second Law and free energy)

1.4 Equilibrium Thermodynamics

Equilibrium thermodynamics is the systematic study of transformations of matter and energy in systems in terms of a concept called thermodynamic equilibrium. The word equilibrium implies a state of balance. In its origins, equilibrium thermodynamics derives from analysis of the Carnot cycle. Here, typically a system, such as a cylinder of gas, initially in its own state of internal thermodynamic equilibrium, is

set out of balance via heat input from a combustion reaction. Then, through a series of steps, as the system settles into its final equilibrium state, work is extracted.
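The Carnot-cycle picture bounds the extractable work by the Carnot efficiency η = 1 − T_cold/T_hot. A minimal sketch; the temperatures and heat input are illustrative numbers, not values from the text:

```python
# Sketch: the Carnot bound on the work extractable from the heat supplied
# in the process described above. The temperatures and heat input are
# illustrative numbers, not values from the text.

def carnot_efficiency(T_hot, T_cold):
    """Maximum fraction of input heat convertible to work (temperatures in K)."""
    return 1.0 - T_cold / T_hot

Q_in = 1000.0   # J of heat delivered by the combustion step (illustrative)
eta = carnot_efficiency(T_hot=600.0, T_cold=300.0)
print(f"eta = {eta:.2f}, W_max = {eta * Q_in:.0f} J")   # → eta = 0.50, W_max = 500 J
```

Any real sequence of steps extracts less than this bound, since the bound is attained only in the reversible (quasi-static) limit.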

In an equilibrium state the potentials, or driving forces, within the system are in exact balance. A central aim of equilibrium thermodynamics is: given a system in a well-defined initial state of thermodynamic equilibrium, subject to accurately specified constraints, to calculate what the state of the system will be once it has reached a new equilibrium after the constraints are changed by an externally imposed intervention. An equilibrium state is mathematically ascertained by seeking the extrema of a thermodynamic potential function, whose nature depends on the constraints imposed on the system. For example, a chemical reaction at constant temperature and pressure will reach equilibrium at a minimum of its components' Gibbs free energy and a maximum of their entropy.

Equilibrium thermodynamics differs from non-equilibrium thermodynamics in that, with the latter, the state of the system under investigation will typically not be uniform but will vary locally in quantities such as energy, entropy, and temperature, as gradients are imposed by dissipative thermodynamic fluxes. In equilibrium thermodynamics, by contrast, the state of the system is considered uniform throughout, defined macroscopically by such quantities as temperature, pressure, or volume. Systems are studied in terms of change from one equilibrium state to another; such a change is called a thermodynamic process.

Ruppeiner geometry is a type of information geometry used to study thermodynamics. It claims that thermodynamic systems can be represented by Riemannian geometry, and that statistical properties can be derived from the model. This geometrical model is based on the idea that there exist equilibrium states which can be represented by points on a two-dimensional surface, and that the distance between these equilibrium states is related to the fluctuations between them.

1.4.1 See also

• Non-equilibrium thermodynamics

• Thermodynamics

1.4.2 References

• Adkins, C.J. (1983). Equilibrium Thermodynamics, 3rd ed. Cambridge: Cambridge University Press.

• Cengel, Y. & Boles, M. (2002). Thermodynamics – an Engineering Approach, 4th ed. (textbook). New York: McGraw Hill.

• Kondepudi, D. & Prigogine, I. (2004). Modern Thermodynamics – From Heat Engines to Dissipative Structures (textbook). New York: John Wiley & Sons.

• Perrot, P. (1998). A to Z of Thermodynamics (dictionary). New York: Oxford University Press.

1.5 Non-equilibrium Thermodynamics

Non-equilibrium thermodynamics is a branch of thermodynamics that deals with physical systems that are not in thermodynamic equilibrium but can be adequately described in terms of variables (non-equilibrium state variables) that represent an extrapolation of the variables used to specify the system in thermodynamic equilibrium. Non-equilibrium thermodynamics is concerned with transport processes and with the rates of chemical reactions. It relies on what may be thought of as more or less nearness to thermodynamic equilibrium. Non-equilibrium thermodynamics is a work in progress, not an established edifice. This article will try to sketch some approaches to it and some concepts important for it.

Almost all systems found in nature are not in thermodynamic equilibrium, for they are changing or can be triggered to change over time, and are continuously and discontinuously subject to flux of matter and energy to and from other systems and to chemical reactions. Some systems and processes are, however, in a useful sense, near enough to thermodynamic equilibrium to allow description with useful accuracy by currently known non-equilibrium thermodynamics. Nevertheless, many natural systems and processes will always remain far beyond the scope of non-equilibrium thermodynamic methods. This is because of the very small size of atoms, as compared with macroscopic systems.

The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. One fundamental difference between equilibrium thermodynamics and non-equilibrium thermodynamics lies in the behaviour of inhomogeneous systems, which require for their study knowledge of rates of reaction that are not considered in equilibrium thermodynamics of homogeneous systems. This is discussed below. Another fundamental and very important difference is the difficulty or impossibility of defining entropy at an instant of time in macroscopic terms for systems not in thermodynamic equilibrium.[1][2]

1.5.1 Scope of non-equilibrium thermodynamics

Difference between equilibrium and non-equilibrium thermodynamics

A profound difference separates equilibrium from non-equilibrium thermodynamics. Equilibrium thermodynamics ignores the time-courses of physical processes. In contrast, non-equilibrium thermodynamics attempts to describe their time-courses in continuous detail. Equilibrium thermodynamics restricts its considerations to processes that have initial and final states of thermodynamic equilibrium; the time-courses of processes are deliberately ignored. Consequently, equilibrium thermodynamics allows processes that pass through states far from thermodynamic equilibrium, which cannot be described even by the variables admitted for non-equilibrium thermodynamics,[3] such as time rates of change of temperature and pressure.[4] For example, in equilibrium thermodynamics, a process is allowed to include even a violent explosion that cannot be described by non-equilibrium thermodynamics.[3] Equilibrium thermodynamics does, however, for theoretical development, use the idealized concept of the "quasi-static process". A quasi-static process is a conceptual (timeless and physically impossible) smooth mathematical passage along a continuous path of states of thermodynamic equilibrium.[5] It is an exercise in differential geometry rather than a process that could occur in actuality. Non-equilibrium thermodynamics, on the other hand, attempting to describe continuous time-courses, needs its state variables to have a very close connection with those of equilibrium thermodynamics.[6] This profoundly restricts the scope of non-equilibrium thermodynamics and places heavy demands on its conceptual framework.

Non-equilibrium state variables

The suitable relationship that defines non-equilibrium thermodynamic state variables is as follows. On occasions when the system happens to be in states that are sufficiently close to thermodynamic equilibrium, non-equilibrium state variables are such that they can be measured locally with sufficient accuracy by the same techniques as are used to measure thermodynamic state variables, or by corresponding time and space derivatives, including fluxes of matter and energy. In general, non-equilibrium thermodynamic systems are spatially and temporally non-uniform, but their non-uniformity still has a sufficient degree of smoothness to support the existence of suitable time and space derivatives of non-equilibrium state variables. Because of the spatial non-uniformity, non-equilibrium state variables that correspond to extensive thermodynamic state variables have to be defined as spatial densities of the corresponding extensive equilibrium state variables. On occasions when the system is sufficiently close to thermodynamic equilibrium, intensive non-equilibrium state variables, for example temperature and pressure, correspond closely with equilibrium state variables. It is necessary that measuring probes be small enough, and rapidly enough responding, to capture relevant non-uniformity. Further, the non-equilibrium state variables are required to be mathematically functionally related to one another in ways that suitably resemble corresponding relations between equilibrium thermodynamic state variables.[7] In reality, these requirements are very demanding, and it may be difficult or practically, or even theoretically, impossible to satisfy them. This is part of why non-equilibrium thermodynamics is a work in progress.
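The idea of intensive state variables defined as smooth local fields, with fluxes driven by their space derivatives, can be sketched for one-dimensional heat conduction. Fourier's law is assumed here as the constitutive relation, and all material numbers are illustrative:

```python
# Sketch: a smooth 1-D temperature field treated as a local (intensive)
# non-equilibrium state variable. The flux is obtained from its space
# derivative via Fourier's law q = -kappa * dT/dx (assumed constitutive law),
# and the local entropy production sigma = q**2 / (kappa * T**2) is >= 0.
# All numbers (conductivity, geometry, end temperatures) are illustrative.

kappa = 400.0                    # W/(m K), a copper-like conductivity
n, length = 51, 1.0              # grid points, rod length in m
dx = length / (n - 1)
T = [300.0 + 100.0 * i / (n - 1) for i in range(n)]   # linear 300 K -> 400 K

for i in range(n - 1):
    dTdx = (T[i + 1] - T[i]) / dx        # space derivative of the field
    q = -kappa * dTdx                    # heat flux, W/m^2
    sigma = q * q / (kappa * T[i] ** 2)  # local entropy production, W/(K m^3)
    assert sigma >= 0.0                  # second law holds locally

print(f"flux q = {q:.0f} W/m^2 (uniform for a linear profile)")
```

This is exactly the "spatial density plus space derivative" picture: temperature is sampled cell by cell, and the flux and entropy production are computed locally from neighbouring values.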

1.5.2 Overview

Some concepts of particular importance for non-equilibrium thermodynamics include the time rate of dissipation of energy (Rayleigh 1873,[8] Onsager 1931,[9] also[7][10]), the time rate of entropy production (Onsager 1931),[9] thermodynamic fields,[11][12][13] dissipative structure,[14] and non-linear dynamical structure.[10] One problem of interest is the thermodynamic study of non-equilibrium steady states, in which entropy production and some flows are non-zero, but there is no time variation of physical variables. One initial approach to non-equilibrium thermodynamics is sometimes called 'classical irreversible thermodynamics'.[2] There are other approaches to non-equilibrium thermodynamics, for example extended irreversible thermodynamics[2][15] and generalized thermodynamics,[16] but they are hardly touched on in the present article.

Quasi-radiationless non-equilibrium thermodynamics of matter in laboratory conditions

According to Wildt[17] (see also Essex[18][19][20]), current versions of non-equilibrium thermodynamics ignore radiant heat; they can do so because they refer to laboratory quantities of matter under laboratory conditions with temperatures well below those of stars. At laboratory temperatures, in laboratory quantities of matter, thermal radiation is weak and can in practice be ignored. But, for example, atmospheric physics is concerned with large amounts of matter, occupying cubic kilometers, that, taken as a whole, are


not within the range of laboratory quantities; then thermal radiation cannot be ignored.

Local equilibrium thermodynamics

The terms 'classical irreversible thermodynamics'[2] and 'local equilibrium thermodynamics' are sometimes used to refer to a version of non-equilibrium thermodynamics that demands certain simplifying assumptions, as follows. The assumptions have the effect of making each very small volume element of the system effectively homogeneous, or well-mixed, or without an effective spatial structure, and without kinetic energy of bulk flow or of diffusive flux. Even within the thought-frame of classical irreversible thermodynamics, care[10] is needed in choosing the independent variables[21] for systems. In some writings, it is assumed that the intensive variables of equilibrium thermodynamics are sufficient as the independent variables for the task (such variables are considered to have no 'memory' and do not show hysteresis); in particular, local flow intensive variables are not admitted as independent variables; local flows are considered as dependent on quasi-static local intensive variables.

Also it is assumed that the local entropy density is the same function of the other local intensive variables as in equilibrium; this is called the local thermodynamic equilibrium assumption[7][10][14][15][22][23][24][25] (see also Keizer (1987)[26]). Radiation is ignored because it is transfer of energy between regions, which can be remote from one another. In the classical irreversible thermodynamic approach, there is allowed very small spatial variation, from very small volume element to adjacent very small volume element, but it is assumed that the global entropy of the system can be found by simple spatial integration of the local entropy density; this means that spatial structure cannot contribute as it properly should to the global entropy assessment for the system. This approach assumes spatial and temporal continuity, and even differentiability, of locally defined intensive variables such as temperature and internal energy density. All of these are very stringent demands. Consequently, this approach can deal with only a very limited range of phenomena. It is nevertheless valuable because it can deal well with some macroscopically observable phenomena.

In other writings, local flow variables are considered; these might be considered as classical by analogy with the time-invariant long-term time-averages of flows produced by endlessly repeated cyclic processes; examples with flows are in the thermoelectric phenomena known as the Seebeck and the Peltier effects, considered by Kelvin in the nineteenth century and by Onsager in the twentieth.[22][27] These effects occur at metal junctions, which were originally effectively treated as two-dimensional surfaces, with no spatial volume and no spatial variation.

Local equilibrium thermodynamics with materials with "memory"

A further extension of local equilibrium thermodynamics is to allow that materials may have "memory", so that their constitutive equations depend not only on present values but also on past values of local equilibrium variables. Thus time comes into the picture more deeply than for time-dependent local equilibrium thermodynamics with memoryless materials, but fluxes are not independent variables of state.[28]

Extended irreversible thermodynamics

Extended irreversible thermodynamics is a branch of non-equilibrium thermodynamics that goes outside the restriction to the local equilibrium hypothesis. The space of state variables is enlarged by including the fluxes of mass, momentum and energy, and eventually higher-order fluxes. The formalism is well suited for describing high-frequency processes and materials at small length scales.

1.5.3 Basic concepts

There are many examples of stationary non-equilibrium systems, some very simple, like a system confined between two thermostats at different temperatures, or the ordinary Couette flow, a fluid enclosed between two flat walls moving in opposite directions and defining non-equilibrium conditions at the walls. Laser action is also a non-equilibrium process, but it depends on departure from local thermodynamic equilibrium and is thus beyond the scope of classical irreversible thermodynamics; here a strong temperature difference is maintained between two molecular degrees of freedom (with a molecular laser, vibrational and rotational molecular motion). The requirement for two component 'temperatures' in the one small region of space precludes local thermodynamic equilibrium, which demands that only one temperature be needed. Damping of acoustic perturbations or shock waves are non-stationary non-equilibrium processes. Driven complex fluids, turbulent systems and glasses are other examples of non-equilibrium systems.

The mechanics of macroscopic systems depends on a number of extensive quantities. It should be stressed that all systems are permanently interacting with their surroundings, thereby causing unavoidable fluctuations of extensive quantities. Equilibrium conditions of thermodynamic systems are related to the maximum property of the entropy. If the only extensive quantity that is allowed to fluctuate is the internal energy, all the other ones being kept strictly constant,

the temperature of the system is measurable and meaningful. The system’s properties are then most conveniently described using the thermodynamic potential Helmholtz free energy (A = U - TS), a Legendre transformation of the energy. If, next to fluctuations of the energy, the macroscopic dimensions (volume) of the system are left fluctuating, we use the Gibbs free energy (G = U + PV - TS), where the system’s properties are determined both by the temperature and by the pressure.
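The two potentials quoted here are Legendre transforms of the internal energy; the standard derivation from the fundamental relation dU = T dS − P dV runs as follows:

```latex
% Legendre transforms behind A = U - TS and G = U + PV - TS,
% starting from the fundamental relation dU = T\,dS - P\,dV.
\begin{aligned}
dU &= T\,dS - P\,dV \\
A  &= U - TS \;\Rightarrow\; dA = dU - T\,dS - S\,dT = -S\,dT - P\,dV \\
G  &= U + PV - TS \;\Rightarrow\; dG = dU + P\,dV + V\,dP - T\,dS - S\,dT
     = -S\,dT + V\,dP
\end{aligned}
```

So A has natural variables (T, V) and G has (T, P), matching the statement above that letting the volume fluctuate at fixed pressure calls for the Gibbs free energy; the dG line also agrees with the expression used in the chemical thermodynamics section.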

Non-equilibrium systems are much more complex and they may undergo fluctuations of more extensive quantities. The boundary conditions impose on them particular intensive variables, like temperature gradients or distorted collective motions (shear motions, vortices, etc.), often called thermodynamic forces. If free energies are very useful in equilibrium thermodynamics, it must be stressed that there is no general law defining stationary non-equilibrium properties of the energy as is the second law of thermodynamics for the entropy in equilibrium thermodynamics. That is why in such cases a more generalized Legendre transformation should be considered. This is the extended Massieu potential. By definition, the entropy (S) is a function of the collection of extensive quantities E_i. Each extensive quantity has a conjugate intensive variable I_i (a restricted definition of intensive variable is used here) so that:

I_i = ∂S/∂E_i.

We then define the extended Massieu function as follows:

k_B M = S − ∑_i (I_i E_i),

where k_B is Boltzmann's constant, whence

k_B dM = ∑_i (E_i dI_i).

The independent variables are the intensities. Intensities are global values, valid for the system as a whole. When boundaries impose on the system different local conditions (e.g. temperature differences), there are intensive variables representing the average value and others representing gradients or higher moments. The latter are the thermodynamic forces driving fluxes of extensive properties through the system. It may be shown that the Legendre transformation changes the maximum condition of the entropy (valid at equilibrium) into a minimum condition of the extended Massieu function for stationary states, no matter whether at equilibrium or not.

1.5.4 Stationary states, fluctuations, and stability

In thermodynamics one is often interested in a stationary state of a process, allowing that the stationary state includes the occurrence of unpredictable and experimentally unreproducible fluctuations in the state of the system. The fluctuations are due to the system's internal sub-processes and to exchange of matter or energy with the system's surroundings that create the constraints that define the process.

If the stationary state of the process is stable, then the unreproducible fluctuations involve local transient decreases of entropy. The reproducible response of the system is then to increase the entropy back to its maximum by irreversible processes: the fluctuation cannot be reproduced with a significant level of probability. Fluctuations about stable stationary states are extremely small except near critical points (Kondepudi and Prigogine 1998, page 323).[29] The stable stationary state has a local maximum of entropy and is locally the most reproducible state of the system. There are theorems about the irreversible dissipation of fluctuations. Here 'local' means local with respect to the abstract space of thermodynamic coordinates of state of the system.

If the stationary state is unstable, then any fluctuation will almost surely trigger the virtually explosive departure of the system from the unstable stationary state. This can be accompanied by increased export of entropy.

1.5.5 Local thermodynamic equilibrium

The scope of present-day non-equilibrium thermodynamics does not cover all physical processes. A condition for the validity of many studies in non-equilibrium thermodynamics of matter is that they deal with what is known as local thermodynamic equilibrium.

Local thermodynamic equilibrium of ponderable matter

Local thermodynamic equilibrium of matter[7][14][23][24][25] (see also Keizer (1987)[26]) means that conceptually, for study and analysis, the system can be spatially and temporally divided into 'cells' or 'micro-phases' of small (infinitesimal) size, in which classical thermodynamical equilibrium conditions for matter are fulfilled to good approximation. These conditions are unfulfilled, for example, in very rarefied gases, in which molecular collisions are infrequent; in the boundary layers of a star, where radiation is passing energy to space; and for interacting fermions at very low temperature, where dissipative processes become ineffective. When these 'cells' are defined, one admits that matter and energy may pass freely between contiguous 'cells',


slowly enough to leave the 'cells’ in their respective individ- thermomechanics,[36][37][38][39] which evolved completely ual local thermodynamic equilibria with respect to intensive independently of statistical mechanics and maximumvariables. entropy principles. One can think here of two 'relaxation times’ separated by order of magnitude.[30] The longer relaxation time is of the order of magnitude of times taken for the macroscopic dynamical structure of the system to change. The shorter is of the order of magnitude of times taken for a single 'cell' to reach local thermodynamic equilibrium. If these two relaxation times are not well separated, then the classical non-equilibrium thermodynamical concept of local thermodynamic equilibrium loses its meaning[30] and other approaches have to be proposed, see for instance Extended irreversible thermodynamics. For example, in the atmosphere, the speed of sound is much greater than the wind speed; this favours the idea of local thermodynamic equilibrium of matter for atmospheric heat transfer studies at altitudes below about 60 km where sound propagates, but not above 100 km, where, because of the paucity of intermolecular collisions, sound does not propagate.

1.5.7

Flows and forces

The fundamental relation of classical equilibrium thermodynamics [40] ∑ µi 1 p dU + dV − dNi T T T i=1 s

dS =

expresses the change in entropy dS of a system as a function of the intensive quantities temperature T , pressure p and ith chemical potential µi and of the differentials of the extensive quantities energy U , volume V and ith particle number Ni .

Following Onsager (1931,I),[9] let us extend our considerations to thermodynamically non-equilibrium systems. As a basis, we need locally defined versions of the extensive Milne’s 1928 definition of local thermodynamic equi- macroscopic quantities U , V and Ni and of the intensive librium in terms of radiative equilibrium macroscopic quantities T , p and µi . Milne (1928),[31] thinking about stars, gave a definition of 'local thermodynamic equilibrium' in terms of the thermal radiation of the matter in each small local 'cell'. He defined 'local thermodynamic equilibrium' in a 'cell' by requiring that it macroscopically absorb and spontaneously emit radiation as if it were in radiative equilibrium in a cavity at the temperature of the matter of the 'cell'. Then it strictly obeys Kirchhoff’s law of equality of radiative emissivity and absorptivity, with a black body source function. The key to local thermodynamic equilibrium here is that the rate of collisions of ponderable matter particles such as molecules should far exceed the rates of creation and annihilation of photons.

1.5.6

Entropy in evolving systems

It is pointed out[32][33][34][35] by W.T. Grandy Jr that entropy, though it may be defined for a non-equilibrium system, is when strictly considered, only a macroscopic quantity that refers to the whole system, and is not a dynamical variable and in general does not act as a local potential that describes local physical forces. Under special circumstances, however, one can metaphorically think as if the thermal variables behaved like local physical forces. The approximation that constitutes classical irreversible thermodynamics is built on this metaphoric thinking.

For classical non-equilibrium studies, we will consider some new locally defined intensive macroscopic variables. We can, under suitable conditions, derive these new variables by locally defining the gradients and flux densities of the basic locally defined macroscopic quantities. Such locally defined gradients of intensive macroscopic variables are called 'thermodynamic forces’. They 'drive' flux densities, perhaps misleadingly often called 'fluxes’, which are dual to the forces. These quantities are defined in the article on Onsager reciprocal relations. Establishing the relation between such forces and flux densities is a problem in statistical mechanics. Flux densities ( Ji ) may be coupled. The article on Onsager reciprocal relations considers the stable near-steady thermodynamically non-equilibrium regime, which has dynamics linear in the forces and flux densities. In stationary conditions, such forces and associated flux densities are by definition time invariant, as also are the system’s locally defined entropy and rate of entropy production. Notably, according to Ilya Prigogine and others, when an open system is in conditions that allow it to reach a stable stationary thermodynamically non-equilibrium state, it organizes itself so as to minimize total entropy production defined locally. This is considered further below.

One wants to take the analysis to the further stage of describing the behaviour of surface and volume integrals of This point of view shares many points in common non-stationary local quantities; these integrals are macrowith the concept and the use of entropy in continuum scopic fluxes and production rates. In general the dynam-

36

CHAPTER 1. CHAPTER 1. INTRODUCTION

ics of these integrals are not adequately described by linear discussion of the possibilities for principles of extrema of equations, though in special cases they can be so described. entropy production and of dissipation of energy: Chapter 12 of Grandy (2008)[1] is very cautious, and finds difficulty in defining the 'rate of internal entropy production' in many 1.5.8 The Onsager relations cases, and finds that sometimes for the prediction of the course of a process, an extremum of the quantity called the Main article: Onsager reciprocal relations rate of dissipation of energy may be more useful than that of the rate of entropy production; this quantity appeared in [9] Following Section III of Rayleigh (1873),[8] Onsager (1931, Onsager’s 1931 origination of this subject. Other writI)[9] showed that in the regime where both the flows ( Ji ) ers have also felt that prospects for general global extremal are small and the thermodynamic forces ( Fi ) vary slowly, principles are clouded. Such writers include Glansdorff and the rate of creation of entropy (σ) is linearly related to the Prigogine (1971), Lebon, Jou and Casas-Vásquez (2008), and Šilhavý (1997). flows:

σ=

∑ i

Ji

∂Fi ∂xi

A recent proposal may perhaps by-pass those clouded prospects.[42][43]

1.5.10

Applications

1.5.11

See also

of

non-equilibrium

and the flows are related to the gradient of the forces, thermodynamics parametrized by a matrix of coefficients conventionally denoted L : Non-equilibrium thermodynamics has been successfully applied to describe biological processes such as protein folding/unfolding and transport through membranes. ∑ ∂Fj Lij Ji = Also, ideas from non-equilibrium thermodynamics and the ∂xj j informatic theory of entropy have been adapted to describe general economic systems.[44] [45] from which it follows that:

σ=

∑ i,j

Lij

∂Fi ∂Fj ∂xi ∂xj

• Dissipative system

The second law of thermodynamics requires that the matrix L be positive definite. Statistical mechanics considerations involving microscopic reversibility of dynamics imply that the matrix L is symmetric. This fact is called the Onsager reciprocal relations.

• Entropy production

1.5.9

Speculated extremal principles for non-equilibrium processes

• Autocatalytic reactions and order creation

Main article: Extremal principles in non-equilibrium thermodynamics

• Bogoliubov-Born-Green-Kirkwood-Yvon of equations

Until recently, prospects for useful extremal principles in this area have seemed clouded. C. Nicolis (1999)[41] concludes that one model of atmospheric dynamics has an attractor which is not a regime of maximum or minimum dissipation; she says this seems to rule out the existence of a global organizing principle, and comments that this is to some extent disappointing; she also points to the difficulty of finding a thermodynamically consistent form of entropy production. Another top expert offers an extensive

• Extremal principles in non-equilibrium thermodynamics • Self-organization

• Self-organizing criticality

• Boltzmann equation • Vlasov equation • Maxwell’s demon • Information entropy • Constructal theory • Spontaneous symmetry breaking

hierarchy
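The linear flux–force regime described above for the Onsager relations can be illustrated numerically. This is an illustrative sketch, not from the original text: the coefficient values and function name are invented, and the forces F_i stand in for the already-formed gradient terms. A symmetric positive-definite matrix L guarantees a non-negative entropy production, as the second law requires.

```python
# Toy illustration of linear flux-force relations J = L F and the
# resulting entropy production sigma = F^T L F (Onsager's linear regime).
# The matrix L below is symmetric and positive definite; values are invented.
def flows_and_entropy_production(L, F):
    n = len(F)
    # J_i = sum_j L_ij F_j
    J = [sum(L[i][j] * F[j] for j in range(n)) for i in range(n)]
    # sigma = sum_i J_i F_i = sum_{i,j} L_ij F_i F_j
    sigma = sum(F[i] * J[i] for i in range(n))
    return J, sigma

L = [[2.0, 0.5],
     [0.5, 1.0]]   # symmetric (Onsager reciprocity), positive definite
F = [1.0, -2.0]    # thermodynamic forces (toy values)

J, sigma = flows_and_entropy_production(L, F)
print(J, sigma)  # [1.0, -1.5] 4.0  -- sigma > 0, consistent with the second law
```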


1.5.12 References

[1] Grandy, W.T., Jr (2008).
[2] Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics: Foundations, Applications, Frontiers, Springer-Verlag, Berlin, e-ISBN 978-3-540-74252-4.
[3] Lieb, E.H., Yngvason, J. (1999), p. 5.
[4] Gyarmati, I. (1967/1970), pp. 8–12.
[5] Callen, H.B. (1960/1985), § 4–2.
[6] Glansdorff, P., Prigogine, I. (1971), Ch. II, § 2.
[7] Gyarmati, I. (1967/1970).
[8] Strutt, J.W. (1871). "Some General Theorems relating to Vibrations". Proceedings of the London Mathematical Society s1–4: 357–368. doi:10.1112/plms/s1-4.1.357.
[9] Onsager, L. (1931). "Reciprocal relations in irreversible processes, I". Physical Review 37 (4): 405–426. Bibcode:1931PhRv...37..405O. doi:10.1103/PhysRev.37.405.
[10] Lavenda, B.H. (1978). Thermodynamics of Irreversible Processes, Macmillan, London, ISBN 0-333-21616-4.
[11] Gyarmati, I. (1967/1970), pp. 4–14.
[12] Ziegler, H. (1983). An Introduction to Thermomechanics, North-Holland, Amsterdam, ISBN 0-444-86503-9.
[13] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, Wiley-Interscience, New York, ISBN 0-471-04600-0, Section 3.2, pp. 64–72.
[14] Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability, and Fluctuations, Wiley-Interscience, London, ISBN 0-471-30280-5.
[15] Jou, D., Casas-Vázquez, J., Lebon, G. (1993). Extended Irreversible Thermodynamics, Springer, Berlin, ISBN 3-540-55874-8, ISBN 0-387-55874-8.
[16] Eu, B.C. (2002).
[17] Wildt, R. (1972). "Thermodynamics of the gray atmosphere. IV. Entropy transfer and production". Astrophysical Journal 174: 69–77. Bibcode:1972ApJ...174...69W. doi:10.1086/151469.
[18] Essex, C. (1984a). "Radiation and the irreversible thermodynamics of climate". Journal of the Atmospheric Sciences 41 (12): 1985–1991. Bibcode:1984JAtS...41.1985E. doi:10.1175/1520-0469(1984)041 2.0.CO;2.
[19] Essex, C. (1984b). "Minimum entropy production in the steady state and radiative transfer". Astrophysical Journal 285: 279–293. Bibcode:1984ApJ...285..279E. doi:10.1086/162504.
[20] Essex, C. (1984c). "Radiation and the violation of bilinearity in the irreversible thermodynamics of irreversible processes". Planetary and Space Science 32 (8): 1035–1043. Bibcode:1984P&SS...32.1035E. doi:10.1016/0032-0633(84)90060-6.
[21] Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London, page 1.
[22] De Groot, S.R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam.
[23] Balescu, R. (1975). Equilibrium and Non-equilibrium Statistical Mechanics, John Wiley & Sons, New York, ISBN 0-471-04600-0.
[24] Mihalas, D., Weibel-Mihalas, B. (1984). Foundations of Radiation Hydrodynamics, Oxford University Press, New York, ISBN 0-19-503437-6.
[25] Schloegl, F. (1989). Probability and Heat: Fundamentals of Thermostatistics, Friedr. Vieweg & Sohn, Braunschweig, ISBN 3-528-06343-2.
[26] Keizer, J. (1987). Statistical Thermodynamics of Nonequilibrium Processes, Springer-Verlag, New York, ISBN 0-387-96501-7.
[27] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester UK, ISBN 978-0-470-01598-8, pp. 333–338.
[28] Coleman, B.D., Noll, W. (1963). The thermodynamics of elastic materials with heat conduction and viscosity, Arch. Ration. Mech. Analysis, 13: 167–178.
[29] Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures, Wiley, Chichester, ISBN 0-471-97394-7.
[30] Zubarev, D.N. (1974). Nonequilibrium Statistical Thermodynamics, translated from the Russian by P.J. Shepherd, Consultants Bureau, New York, ISBN 0-306-10895-X; ISBN 978-0-306-10895-2.
[31] Milne, E.A. (1928). "The effect of collisions on monochromatic radiative equilibrium". Monthly Notices of the Royal Astronomical Society 88: 493–502. Bibcode:1928MNRAS..88..493M. doi:10.1093/mnras/88.6.493.
[32] Grandy, W.T., Jr (2004). "Time Evolution in Macroscopic Systems. I. Equations of Motion". Foundations of Physics 34: 1. arXiv:cond-mat/0303290. Bibcode:2004FoPh...34....1G. doi:10.1023/B:FOOP.0000012007.06843.ed.
[33] Grandy, W.T., Jr (2004). "Time Evolution in Macroscopic Systems. II. The Entropy". Foundations of Physics 34: 21. arXiv:cond-mat/0303291. Bibcode:2004FoPh...34...21G. doi:10.1023/B:FOOP.0000012008.36856.c1.
[34] Grandy, W.T., Jr (2004). "Time Evolution in Macroscopic Systems. III: Selected Applications". Foundations of Physics 34 (5): 771. Bibcode:2004FoPh...34..771G. doi:10.1023/B:FOOP.0000022187.45866.81.
[35] Grandy 2004; see also.
[36] Truesdell, Clifford (1984). Rational Thermodynamics (2nd ed.). Springer.
[37] Maugin, Gérard A. (2002). Continuum Thermomechanics. Kluwer.
[38] Gurtin, Morton E. (2010). The Mechanics and Thermodynamics of Continua. Cambridge University Press.
[39] Amendola, Giovambattista (2012). Thermodynamics of Materials with Memory: Theory and Applications. Springer.
[40] Greiner, W., Neise, L., Stöcker, H. (1997). Thermodynamics and Statistical Mechanics (Classical Theoretical Physics), Springer-Verlag, New York, pp. 85, 91, 101, 108, 116, ISBN 0-387-94299-8.
[41] Nicolis, C. (1999). "Entropy production and dynamical complexity in a low-order atmospheric model". Quarterly Journal of the Royal Meteorological Society 125 (557): 1859–1878. Bibcode:1999QJRMS.125.1859N. doi:10.1002/qj.49712555718.
[42] Attard, P. (2012). "Optimising Principle for Non-Equilibrium Phase Transitions and Pattern Formation with Results for Heat Convection". arXiv:1208.5105.
[43] Attard, P. (2012). Non-Equilibrium Thermodynamics and Statistical Mechanics: Foundations and Applications, Oxford University Press, Oxford UK, ISBN 978-0-19-966276-0.
[44] Pokrovskii, Vladimir (2011). Econodynamics: The Theory of Social Production, Springer, Dordrecht-Heidelberg-London-New York.
[45] Chen, Jing (2015). The Unity of Science and Economics: A New Foundation of Economic Theory, Springer.

Bibliography of cited references

• Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8.
• Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.
• Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability, and Fluctuations, Wiley-Interscience, London, ISBN 0-471-30280-5.
• Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. ISBN 978-0-19-954617-6.
• Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the Hungarian (1967) by E. Gyarmati and W.F. Heinz, Springer, Berlin.
• Lieb, E.H., Yngvason, J. (1999). 'The physics and mathematics of the second law of thermodynamics', Physics Reports, 310: 1–96.

1.5.13 Further reading

• Ziegler, Hans (1977). An Introduction to Thermomechanics. North-Holland, Amsterdam. ISBN 0-444-11080-1. Second edition (1983) ISBN 0-444-86503-9.
• Kleidon, A., Lorenz, R.D., editors (2005). Non-equilibrium Thermodynamics and the Production of Entropy, Springer, Berlin. ISBN 3-540-22495-5.
• Prigogine, I. (1955/1961/1967). Introduction to Thermodynamics of Irreversible Processes. 3rd edition, Wiley Interscience, New York.
• Zubarev, D.N. (1974). Nonequilibrium Statistical Thermodynamics. Consultants Bureau, New York. ISBN 0-306-10895-X; ISBN 978-0-306-10895-2.
• Keizer, J. (1987). Statistical Thermodynamics of Nonequilibrium Processes, Springer-Verlag, New York, ISBN 0-387-96501-7.
• Zubarev, D.N., Morozov, V., Ropke, G. (1996). Statistical Mechanics of Nonequilibrium Processes: Basic Concepts, Kinetic Theory. John Wiley & Sons. ISBN 3-05-501708-0.
• Zubarev, D.N., Morozov, V., Ropke, G. (1997). Statistical Mechanics of Nonequilibrium Processes: Relaxation and Hydrodynamic Processes. John Wiley & Sons. ISBN 3-527-40084-2.
• Tuck, Adrian F. (2008). Atmospheric Turbulence: A Molecular Dynamics Perspective. Oxford University Press. ISBN 978-0-19-923653-4.
• Grandy, W.T., Jr (2008). Entropy and the Time Evolution of Macroscopic Systems. Oxford University Press. ISBN 978-0-19-954617-6.
• Kondepudi, D., Prigogine, I. (1998). Modern Thermodynamics: From Heat Engines to Dissipative Structures. John Wiley & Sons, Chichester. ISBN 0-471-97393-9.
• de Groot, S.R., Mazur, P. (1984). Non-Equilibrium Thermodynamics (Dover). ISBN 0-486-64741-2.

1.5.14 External links

• Stephan Herminghaus' Dynamics of Complex Fluids Department at the Max Planck Institute for Dynamics and Self-Organization
• Non-equilibrium Statistical Thermodynamics applied to Fluid Dynamics and Laser Physics - 1992 book by Xavier de Hemptinne.
• Nonequilibrium Thermodynamics of Small Systems - PhysicsToday.org
• Into the Cool - 2005 book by Dorion Sagan and Eric D. Schneider, on nonequilibrium thermodynamics and evolutionary theory.
• Thermodynamics "beyond" local equilibrium


Chapter 2. Laws of Thermodynamics

2.1 Zeroth law of Thermodynamics

The zeroth law of thermodynamics states that if two thermodynamic systems are each in thermal equilibrium with a third, then they are in thermal equilibrium with each other.

Two systems are said to be in the relation of thermal equilibrium if they are linked by a wall permeable only to heat and they do not change over time.[1] As a convenience of language, systems are sometimes also said to be in a relation of thermal equilibrium if they are not linked so as to be able to transfer heat to each other, but would not do so if they were connected by a wall permeable only to heat. Thermal equilibrium between two systems is a transitive relation.

The physical meaning of the law was expressed by Maxwell in the words: "All heat is of the same kind".[2] For this reason, another statement of the law is "All diathermal walls are equivalent".[3]

The law is important for the mathematical formulation of thermodynamics, which needs the assertion that the relation of thermal equilibrium is an equivalence relation. This information is needed for a mathematical definition of temperature that will agree with the physical existence of valid thermometers.[4]

2.1.1 Zeroth law as equivalence relation

A thermodynamic system is by definition in its own state of internal thermodynamic equilibrium, that is to say, there is no change in its observable state (i.e. macrostate) over time and no flows occur in it. One precise statement of the zeroth law is that the relation of thermal equilibrium is an equivalence relation on pairs of thermodynamic systems.[5] In other words, the set of all systems each in its own state of internal thermodynamic equilibrium may be divided into subsets in which every system belongs to one and only one subset, and is in thermal equilibrium with every other member of that subset, and is not in thermal equilibrium with a member of any other subset. This means that a unique "tag" can be assigned to every system, and if the "tags" of two systems are the same, they are in thermal equilibrium with each other, and if different, they are not. This property is used to justify the use of empirical temperature as a tagging system. Empirical temperature provides further relations of thermally equilibrated systems, such as order and continuity with regard to "hotness" or "coldness", but these are not implied by the standard statement of the zeroth law.

If it is defined that a thermodynamic system is in thermal equilibrium with itself (i.e., thermal equilibrium is reflexive), then the zeroth law may be stated as follows:[6]

If a body A be in thermal equilibrium with two other bodies, B and C, then B and C are in thermal equilibrium with one another.

This statement asserts that thermal equilibrium is a left-Euclidean relation between thermodynamic systems. If we also define that every thermodynamic system is in thermal equilibrium with itself, then thermal equilibrium is also a reflexive relation. Binary relations that are both reflexive and Euclidean are equivalence relations. Thus, again implicitly assuming reflexivity, the zeroth law is often expressed as a right-Euclidean statement:[7]

If two systems are in thermal equilibrium with a third system, then they are in thermal equilibrium with each other.

One consequence of an equivalence relationship is that the equilibrium relationship is symmetric: if A is in thermal equilibrium with B, then B is in thermal equilibrium with A. Thus we may say that two systems are in thermal equilibrium with each other, or that they are in mutual equilibrium. Another consequence of equivalence is that thermal equilibrium is a transitive relationship and is occasionally expressed as such:[4][8]

If A is in thermal equilibrium with B and if B is in thermal equilibrium with C, then A is in thermal equilibrium with C.

A reflexive, transitive relationship does not guarantee an equivalence relationship. In order for the above statement to be true, both reflexivity and symmetry must be implicitly assumed.

It is the Euclidean relationships which apply directly to thermometry. An ideal thermometer is a thermometer which does not measurably change the state of the system it is measuring. Assuming that the unchanging reading of an ideal thermometer is a valid "tagging" system for the equivalence classes of a set of equilibrated thermodynamic systems, then if a thermometer gives the same reading for two systems, those two systems are in thermal equilibrium, and if we thermally connect the two systems, there will be no subsequent change in the state of either one. If the readings are different, then thermally connecting the two systems will cause a change in the states of both systems, and when the change is complete, they will both yield the same thermometer reading. The zeroth law provides no information regarding this final reading.
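The "tagging" argument above can be sketched computationally. This is an illustrative toy, not from the original text: the system names, readings, and function name are invented. An ideal thermometer assigns each equilibrated system a numerical tag, and equality of tags partitions the systems into mutual-equilibrium classes, exactly as an equivalence relation requires.

```python
from collections import defaultdict

def equilibrium_classes(readings):
    """Partition systems into mutual-equilibrium classes by thermometer tag.

    readings: dict mapping system name -> ideal-thermometer reading.
    Systems with equal readings are in thermal equilibrium (zeroth law).
    """
    classes = defaultdict(list)
    for system, tag in readings.items():
        classes[tag].append(system)
    return list(classes.values())

# Toy data: three systems, two of them at the same empirical temperature.
readings = {"A": 300.0, "B": 300.0, "C": 350.0}
print(equilibrium_classes(readings))  # [['A', 'B'], ['C']]
```

Connecting A and B through a diathermal wall would change neither reading; connecting either to C would change both, until all tags agree.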

2.1.2 Foundation of temperature

The zeroth law establishes thermal equilibrium as an equivalence relationship. An equivalence relationship on a set (such as the set of all systems each in its own state of internal thermodynamic equilibrium) divides that set into a collection of distinct subsets ("disjoint subsets") where any member of the set is a member of one and only one such subset. In the case of the zeroth law, these subsets consist of systems which are in mutual equilibrium. This partitioning allows any member of the subset to be uniquely "tagged" with a label identifying the subset to which it belongs. Although the labeling may be quite arbitrary,[9] temperature is just such a labeling process which uses the real number system for tagging. The zeroth law justifies the use of suitable thermodynamic systems as thermometers to provide such a labeling, which yield any number of possible empirical temperature scales, and justifies the use of the second law of thermodynamics to provide an absolute, or thermodynamic, temperature scale. Such temperature scales bring additional continuity and ordering (i.e., "hot" and "cold") properties to the concept of temperature.[7]

In the space of thermodynamic parameters, zones of constant temperature form a surface that provides a natural order of nearby surfaces. One may therefore construct a global temperature function that provides a continuous ordering of states. The dimensionality of a surface of constant temperature is one less than the number of thermodynamic parameters; thus, for an ideal gas described with three thermodynamic parameters P, V and N, it is a two-dimensional surface.

For example, if two systems of ideal gases are in equilibrium, then P1V1/N1 = P2V2/N2, where Pi is the pressure in the i-th system, Vi is the volume, and Ni is the amount (in moles, or simply the number of atoms) of gas. The surface PV/N = const defines surfaces of equal thermodynamic temperature, and one may label them by defining T so that PV/N = RT, where R is some constant. These systems can now be used as a thermometer to calibrate other systems. Such systems are known as "ideal gas thermometers".

In a sense, focused on in the zeroth law, there is only one kind of diathermal wall or one kind of heat, as expressed by Maxwell's dictum that "All heat is of the same kind".[2] But in another sense, heat is transferred in different ranks, as expressed by Sommerfeld's dictum "Thermodynamics investigates the conditions that govern the transformation of heat into work. It teaches us to recognize temperature as the measure of the work-value of heat. Heat of higher temperature is richer, is capable of doing more work. Work may be regarded as heat of an infinitely high temperature, as unconditionally available heat."[10] This is why temperature is the particular variable indicated by the zeroth law's statement of equivalence.
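The ideal-gas-thermometer construction can be made concrete. In this sketch (the numerical values and function name are invented for illustration; R is taken as the molar gas constant), two gas samples with different P, V and N but the same value of PV/(NR) carry the same temperature label T and are therefore tagged as being in mutual equilibrium:

```python
R = 8.314  # molar gas constant, J/(mol K)

def ideal_gas_temperature(P, V, N):
    """Empirical temperature label T from PV/N = RT (ideal gas thermometer).

    P in pascals, V in cubic metres, N in moles.
    """
    return P * V / (N * R)

# Two samples: the second has doubled pressure and doubled amount of gas,
# so PV/N is unchanged and both carry the same temperature tag.
T1 = ideal_gas_temperature(P=101325.0, V=0.024465, N=1.0)
T2 = ideal_gas_temperature(P=202650.0, V=0.024465, N=2.0)
print(T1, T2)
```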

2.1.3 Physical meaning of the usual statement of the zeroth law

The present article states the zeroth law as it is often summarized in textbooks. Nevertheless, this usual statement perhaps does not explicitly convey the full physical meaning that underlies it. The underlying physical meaning was perhaps first clarified by Maxwell in his 1871 textbook.[2]

In Carathéodory's (1909) theory, it is postulated that there exist walls "permeable only to heat", though heat is not explicitly defined in that paper. This postulate is a physical postulate of existence. It does not, however, as worded just previously, say that there is only one kind of heat. This paper of Carathéodory states as proviso 4 of its account of such walls: "Whenever each of the systems S1 and S2 is made to reach equilibrium with a third system S3 under identical conditions, systems S1 and S2 are in mutual equilibrium".[11] It is the function of this statement in the paper, not there labeled as the zeroth law, to provide not only for the existence of transfer of energy other than by work or transfer of matter, but further to provide that such transfer is unique in the sense that there is only one kind of such wall, and one kind of such transfer. This is signaled in the postulate of this paper of Carathéodory that precisely one non-deformation variable is needed to complete the specification of a thermodynamic state, beyond the necessary deformation variables, which are not restricted in number. It is therefore not exactly clear what Carathéodory means when in the introduction of this paper he writes "It is possible to develop the whole theory without assuming the existence of heat, that is of a quantity that is of a different nature from the normal mechanical quantities."

Maxwell (1871) discusses at some length ideas which he summarizes by the words "All heat is of the same kind".[2] Modern theorists sometimes express this idea by postulating the existence of a unique one-dimensional hotness manifold, into which every proper temperature scale has a monotonic mapping.[12] This may be expressed by the statement that there is only one kind of temperature, regardless of the variety of scales in which it is expressed. Another modern expression of this idea is that "All diathermal walls are equivalent".[13] This might also be expressed by saying that there is precisely one kind of non-mechanical, non-matter-transferring contact equilibrium between thermodynamic systems.

These ideas may be regarded as helping to clarify the physical meaning of the usual statement of the zeroth law of thermodynamics. It is the opinion of Lieb and Yngvason (1999) that the derivation from statistical mechanics of the law of entropy increase is a goal that has so far eluded the deepest thinkers.[14] Thus the idea remains open to consideration that the existence of heat and temperature are needed as coherent primitive concepts for thermodynamics, as expressed, for example, by Maxwell and Planck. On the other hand, Planck in 1926 clarified how the second law can be stated without reference to heat or temperature, by referring to the irreversible and universal nature of friction in natural thermodynamic processes.[15]

2.1.4 History

According to Arnold Sommerfeld, Ralph H. Fowler invented the title 'the zeroth law of thermodynamics' when he was discussing the 1935 text of Saha and Srivastava. They write on page 1 that "every physical quantity must be measurable in numerical terms". They presume that temperature is a physical quantity and then deduce the statement "If a body A is in temperature equilibrium with two bodies B and C, then B and C themselves will be in temperature equilibrium with each other". They then in a self-standing paragraph italicize, as if to state their basic postulate: "Any of the physical properties of A which change with the application of heat may be observed and utilised for the measurement of temperature." They do not themselves here use the term 'zeroth law of thermodynamics'.[16][17] There are very many statements of these physical ideas in the physics literature long before this text, in very similar language. What was new here was just the label 'zeroth law of thermodynamics'. Fowler, with co-author Edward A. Guggenheim, wrote of the zeroth law as follows:

...we introduce the postulate: If two assemblies are each in thermal equilibrium with a third assembly, they are in thermal equilibrium with each other.

They then proposed that "it may be shown to follow that the condition for thermal equilibrium between several assemblies is the equality of a certain single-valued function of the thermodynamic states of the assemblies, which may be called the temperature t, any one of the assemblies being used as a 'thermometer' reading the temperature t on a suitable scale. This postulate of the 'Existence of temperature' could with advantage be known as the zeroth law of thermodynamics". The first sentence of this present article is a version of this statement.[18] It is not explicitly evident in the existence statement of Fowler and Guggenheim that temperature refers to a unique attribute of a state of a system, such as is expressed in the idea of the hotness manifold. Also, their statement refers explicitly to statistical mechanical assemblies, not explicitly to macroscopic thermodynamically defined systems.

2.1.5 References

Citations

[1] Carathéodory, C. (1909).
[2] Maxwell, J.C. (1871), p. 57.
[3] Bailyn, M. (1994), pp. 24, 144.
[4] Lieb, E.H., Yngvason, J. (1999), p. 56.
[5] Lieb, E.H., Yngvason, J. (1999), p. 52.
[6] Planck, M. (1914), p. 2.
[7] Buchdahl, H.A. (1966), p. 73.
[8] Kondepudi, D. (2008), p. 7.
[9] Dugdale, J.S. (1996), p. 35.
[10] Sommerfeld, A. (1923), p. 36.
[11] Carathéodory, C. (1909), Section 6.
[12] Serrin, J. (1986), p. 6.
[13] Bailyn, M. (1994), p. 23.
[14] Lieb, E.H., Yngvason, J. (1999), p. 5.
[15] Planck, M. (1926).
[16] Sommerfeld, A. (1951/1955), p. 1.
[17] Saha, M.N., Srivastava, B.N. (1935), p. 1.
[18] Fowler, R., Guggenheim, E.A. (1939/1965), p. 56.

Works cited

• Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 978-0-88318-797-5.
• Buchdahl, H.A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press.
• Carathéodory, C. (1909). "Untersuchungen über die Grundlagen der Thermodynamik", Mathematische Annalen (in German), 67: 355–386. doi:10.1007/BF01450409. A partly reliable translation is to be found at Kestin, J. (1976), The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
• Dugdale, J.S. (1996). Entropy and its Physical Interpretation, Taylor & Francis, ISBN 0-7484-0569-0.
• Fowler, R., Guggenheim, E.A. (1939/1965). Statistical Thermodynamics. A version of Statistical Mechanics for Students of Physics and Chemistry, first printing 1939, reprinted with corrections 1965, Cambridge University Press, Cambridge UK.
• Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, ISBN 978-0470-01598-8.
• Lieb, E.H., Yngvason, J. (1999). "The physics and mathematics of the second law of thermodynamics", Physics Reports, 310: 1–96.
• Maxwell, J.C. (1871). Theory of Heat, Longmans, Green, and Co., London.
• Planck, M. (1914). The Theory of Heat Radiation, a translation by Masius, M. of the second German edition, P. Blakiston's Son & Co., Philadelphia.
• Planck, M. (1926). "Über die Begründung des zweiten Hauptsatzes der Thermodynamik", S.B. Preuß. Akad. Wiss., phys. math. Kl.: 453–463.
• Saha, M.N., Srivastava, B.N. (1935). A Treatise on Heat (Including Kinetic Theory of Gases, Thermodynamics and Recent Advances in Statistical Thermodynamics), the second and revised edition of A Text Book of Heat, The Indian Press, Allahabad and Calcutta.
• Serrin, J. (1986). Chapter 1, 'An Outline of Thermodynamical Structure', pages 3–32, in New Perspectives in Thermodynamics, edited by J. Serrin, Springer, Berlin, ISBN 3-540-15931-2.
• Sommerfeld, A. (1923). Atomic Structure and Spectral Lines, translated from the third German edition by H.L. Brose, Methuen, London.
• Sommerfeld, A. (1951/1955). Thermodynamics and Statistical Mechanics, vol. 5 of Lectures on Theoretical Physics, edited by F. Bopp, J. Meixner, translated by J. Kestin, Academic Press, New York.

2.1.6 Further reading

• Atkins, Peter (2007). Four Laws That Drive the Universe, Oxford University Press, New York, ISBN 978-0-19-923236-9.

2.2 First law of thermodynamics

The first law of thermodynamics is a version of the law of conservation of energy, adapted for thermodynamic systems. The law of conservation of energy states that the total energy of an isolated system is constant; energy can be transformed from one form to another, but cannot be created or destroyed. The first law is often formulated by stating that the change in the internal energy of a closed system is equal to the amount of heat supplied to the system, minus the amount of work done by the system on its surroundings. Equivalently, perpetual motion machines of the first kind are impossible.

2.2.1 History

Investigations into the nature of heat and work and their relationship began with the invention of the first engines used to extract water from mines. Improvements to such engines, so as to increase their efficiency and power output, came first from mechanics who tinkered with such machines but only slowly advanced the art. Deeper investigations that placed those on a mathematical and physical basis came later.

The process of development of the first law of thermodynamics was by way of much investigative trial and error over a period of about half a century. The first full statements of the law came in 1850 from Rudolf Clausius and from William Rankine; Rankine's statement was perhaps not quite as clear and distinct as was Clausius'.[1] A main aspect of the struggle was to deal with the previously proposed caloric theory of heat.

Germain Hess in 1840 stated a conservation law for the so-called 'heat of reaction' for chemical reactions.[2] His law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work.

According to Truesdell (1980), Julius Robert von Mayer in 1841 made a statement that meant that "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law.[3][4]

Original statements: the "thermodynamic approach"

The original nineteenth century statements of the first law of thermodynamics appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, not defined or constructed by the theoretical development of the framework, but rather presupposed as prior to it and already accepted. The primitive notion of heat was taken as empirically established, especially through calorimetry regarded as a subject in its own right, prior to thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach.[5]

The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes:

    In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced.[6]

Clausius also stated the law in another form, referring to the existence of a function of state of the system, the internal energy, and expressed it in terms of a differential equation for the increments of a thermodynamic process.[7] This equation may be described as follows:

    In a thermodynamic process involving a closed system, the increment in the internal energy is equal to the difference between the heat accumulated by the system and the work done by it.

Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system.

The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, that were revealed by Bohr's energy relation hν = E_n'' − E_n'. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).[8]

Conceptual revision: the "mechanical approach"

In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat."[9] This definition may be regarded as expressing a conceptual revision, as follows. This was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born's[10] influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth-century writers. It might be called the "mechanical approach".[11]

Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer.

The "mechanical" approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic, and is unaccompanied by matter transfer. Initially, it "cleverly" (according to Bailyn) refrains from labelling as 'heat' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for the purpose of this definition, one can prohibit transfer of energy as work across a wall of interest. Then walls of interest fall into two classes, (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium; they are defined as adiabatic; and (b) those without

such independence; they are defined as non-adiabatic.[12]

This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Helmholtz,[13] but also in the work of many others.[5]

Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat.[10][15] In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat.[16] Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks (examples:[17][18][19]). Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.

2.2.2 Conceptually revised statement, according to the mechanical approach

The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process, that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state, can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work.

The revised statement is then:

    For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes.

This statement is much less close to the empirical basis than are the original statements,[14] but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines.[10]

2.2.3 Description

The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic.

A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for a single cycle of a cyclic process are the net work done, and the net heat taken in (or 'consumed', in Clausius' statement), by the system.

In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units. The constant of proportionality is universal and independent of the system, and in 1845 and 1847 was measured by James Joule, who described it as the mechanical equivalent of heat.

In a non-cyclic process, the change in the internal energy of a system is equal to the net energy added as heat to the system minus the net work done by the system, both being measured in mechanical units. Taking ∆U as a change in internal energy, one writes

    ∆U = Q − W    (sign convention of Clausius, and generally in this article),

where Q denotes the net quantity of heat supplied to the system by its surroundings and W denotes the net work done by the system. This sign convention is implicit in Clausius' statement of the law given above. It originated with the study of heat engines that produce useful work by consumption of heat.

Often nowadays, however, writers use the IUPAC convention by which the first law is formulated with work done on


the system by its surroundings having a positive sign. With this now often used sign convention for work, the first law for a closed system may be written:

    ∆U = Q + W    (sign convention of IUPAC).[20]

This convention follows physicists such as Max Planck,[21] and considers all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of any use for the system as an engine or other device.

When a system expands in a fictive quasistatic process, the work done by the system on the environment is the product, P dV, of pressure, P, and volume change, dV, whereas the work done on the system is −P dV. Using either sign convention for work, the change in internal energy of the system is:

    dU = δQ − P dV    (quasi-static process),

where δQ denotes the infinitesimal increment of heat supplied to the system from its surroundings. Work and heat are expressions of actual physical processes of supply or removal of energy, while the internal energy U is a mathematical abstraction that keeps account of the exchanges of energy that befall the system. Thus the term heat for Q means "that amount of energy added or removed by conduction of heat or by thermal radiation", rather than referring to a form of energy within the system. Likewise, the term work energy for W means "that amount of energy gained or lost as the result of work". Internal energy is a property of the system, whereas work done and heat supplied are not. A significant result of this distinction is that a given internal energy change ∆U can be achieved by, in principle, many combinations of heat and work.

2.2.4 Various statements of the law for closed systems

The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author.[5][22]

For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'.

There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another.[23]

An example of a physical statement is that of Planck (1897/1903):

    It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing.[24]

This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium.
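The quasi-static work term P dV that appears above can be made concrete with a short numeric sketch. The following is an illustration only, under assumed values: an ideal gas expanding isothermally, for which P = nRT/V along the path, so that summing P dV over small volume steps approaches the closed form nRT ln(V2/V1).

```python
import math

# Numeric sketch of the quasi-static work term P dV (all values assumed,
# purely illustrative): isothermal expansion of an ideal gas, for which
# P = nRT/V along the path. The work done BY the gas is the integral of
# P dV from V1 to V2, with closed form nRT*ln(V2/V1).

n, R, T = 1.0, 8.314, 300.0   # mol, J/(mol K), K -- assumed values
V1, V2 = 0.010, 0.020         # m^3: the volume doubles

steps = 100_000
dV = (V2 - V1) / steps
# Midpoint Riemann sum of P dV approximates the quasi-static work integral.
W_numeric = sum(n * R * T / (V1 + (i + 0.5) * dV) * dV for i in range(steps))

W_exact = n * R * T * math.log(V2 / V1)   # closed form for comparison

print(round(W_numeric, 2), round(W_exact, 2))
```

The agreement of the finely subdivided sum with the closed form reflects the quasi-static idealization: the work integral is well defined only in the limit of vanishingly small steps along the path.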

An example of a mathematical statement is that of Crawford (1963):

    For a given system we let ∆E_kin = large-scale mechanical energy, ∆E_pot = large-scale potential energy, and ∆E_tot = total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition

        E_tot = E_kin + E_pot + U.

    For any finite process, whether reversible or irreversible,

        ∆E_tot = ∆E_kin + ∆E_pot + ∆U.

    The first law in a form that involves the principle of conservation of energy more generally is

        ∆E_tot = Q + W.

    Here Q and W are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible. [Warner, Am. J. Phys., 29, 124 (1961)][25]

This statement by Crawford, for W, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems, and to internal energy U defined for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures.

The history of statements of the law for closed systems has two main periods, before and after the work of Bryan (1907),[26] of Carathéodory (1909),[16] and the approval of Carathéodory's work given by Born (1921).[15] The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date.
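The two sign conventions just contrasted can be checked against each other with a few lines of arithmetic. The following is a minimal sketch with assumed, illustrative values (100 J of heat absorbed, 40 J of work done by the system on its surroundings); the numbers are not taken from the text.

```python
# Checking the Clausius and IUPAC sign conventions against each other
# with assumed, purely illustrative values.

Q = 100.0      # J, net heat supplied TO the system
W_by = 40.0    # J, net work done BY the system (Clausius convention)

delta_U_clausius = Q - W_by   # first law, Clausius convention: dU = Q - W
W_on = -W_by                  # the same transfer, counted as work done ON the system
delta_U_iupac = Q + W_on      # first law, IUPAC convention: dU = Q + W

print(delta_U_clausius, delta_U_iupac)  # both conventions give the same change in U
```

The point of the sketch is that the conventions differ only in which direction of work is counted positive; the internal-energy change they assign to a given physical process is identical.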

Carathéodory's celebrated presentation of equilibrium thermodynamics[16] refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors.

Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated.[17][18][19]

The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables", such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures,[27] and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat.

According to Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume.[17]

Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude.

Sometimes the concept of internal energy is not made explicit in the statement.[28][29][30]

Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process.[31]

A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature.[32] A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference." The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature.[33]

A respected text disregards Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy.[34] Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous".[35] These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903).[36]

2.2.5 Evidence for the first law of thermodynamics for closed systems

The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes.[1]

The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).

Adiabatic processes

In an adiabatic process, there is transfer of energy as work but not as heat. For every adiabatic process that takes a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures.

For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature with the distance descended by the mass. Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring, ...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant whether the work is electrical, mechanical, or chemical, or whether it is done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system.

Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank.

A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence."[14] Another expression of this view is "... no systematic precise experiments to verify this generalization directly have ever been attempted."[37]

This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below.
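The Joule paddle-wheel arrangement described above can be given an illustrative numeric form. All values below are assumed for the sketch, not taken from the text: a weight descends and does adiabatic work on thermally isolated water, and the temperature rise follows from equating that work to m_water · c_water · ∆T.

```python
# Illustrative numbers for Joule's paddle-wheel bookkeeping (all values
# assumed): a descending weight does adiabatic work on thermally
# isolated water, and the temperature rise follows from
# W = m_water * c_water * dT.

g = 9.81           # m/s^2, gravitational acceleration
m_weight = 10.0    # kg, descending mass (assumed)
h = 2.0            # m, distance descended (assumed)
m_water = 0.5      # kg of water in the tank (assumed)
c_water = 4184.0   # J/(kg K), specific heat capacity of water

W = m_weight * g * h          # adiabatic work done on the water, joules
dT = W / (m_water * c_water)  # resulting temperature rise, kelvin

print(round(W, 1), round(dT, 4))
```

The small size of the computed temperature rise for a plausible weight and drop is exactly why Joule needed very careful thermometry: the mechanical equivalent of heat is large in these units.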

That important state variable was first recognized and denoted U by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it U; and in 1851 by Kelvin, who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function U "energy". In 1882 it was named as the internal energy by Helmholtz.[38]

If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed.[39] A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states.

In an adiabatic process, adiabatic work takes the system either from a reference state O with internal energy U(O) to an arbitrary one A with internal energy U(A), or from the state A to the state O:

    U(A) = U(O) − W_{O→A}^{adiabatic}    or    U(O) = U(A) − W_{A→O}^{adiabatic}.

This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement:

    For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless of the details of the process, and determines a state function called internal energy, U.

Except under the special, and strictly speaking fictional, condition of reversibility, only one of the processes, adiabatic O → A or adiabatic A → O, is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article.

The fact of such irreversibility may be dealt with in two main ways, according to different points of view:

• Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory,[16][19][40] is to rely on the previously established concept of quasi-static processes,[41][42][43] as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, that transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings.[44] This can be taken to justify the formula

    W_{A→O}^{adiabatic, quasi-static} = −W_{O→A}^{adiabatic, quasi-static}.    (1)

• Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify the formula (1) above. Moreover, it deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. This way does not provide theoretical purity in terms of adiabatic work processes, but is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions.

The formula (1) above allows that, to go by processes of quasi-static adiabatic work from the state A to the state B, we can take a path that goes through the reference state O, since the quasi-static adiabatic work is independent of the path:

    −W_{A→B}^{adiabatic, quasi-static} = −W_{A→O}^{adiabatic, quasi-static} − W_{O→B}^{adiabatic, quasi-static} = W_{O→A}^{adiabatic, quasi-static} − W_{O→B}^{adiabatic, quasi-static}.

Adynamic processes

See also: Thermodynamic processes

A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest, by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic. A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change.

The calorimeter can be calibrated by adiabatically doing externally determined work on it. The most accurate method is by passing an electric current from outside through a resistance inside the calorimeter. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as work. According to one textbook, “The most common device for measuring ∆U is an adiabatic bomb calorimeter.”[45] According to another textbook, “Calorimetry is widely used in present day laboratories.”[46] According to one opinion, “Most thermodynamic data come from calorimetry...”[47] According to another opinion, “The most common method of measuring ‘heat’ is with a calorimeter.”[48]
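The electrical-calibration step described above is simple arithmetic: the work done on the calorimeter by a current I through a resistance R for time t is W = I²Rt, and in an adiabatic calibration run all of this work goes into internal energy, fixing the effective heat capacity. A minimal numeric sketch, with illustrative values rather than data from any cited experiment:

```python
# Electrical calibration of a calorimeter (illustrative numbers).
# In an adiabatic calibration run, all electrical work goes into
# internal energy: Delta_U = W_elec = I^2 * R * t,
# and the heat capacity follows from the observed temperature rise.

I = 0.50        # current through the heater, A (assumed value)
R = 100.0       # heater resistance, ohm (assumed value)
t = 120.0       # duration of the run, s
dT = 1.25       # observed temperature rise, K

W_elec = I**2 * R * t          # J; work done on the calorimeter
C = W_elec / dT                # J/K; calibrated heat capacity

# A later measurement run: the same calibrated heat capacity converts
# an observed temperature rise into a quantity of energy transferred.
dT_run = 0.80
Q_measured = C * dT_run        # J

print(W_elec, C, Q_measured)
```

This is the comparison the text describes: the calibration equates a calorimetric reading with a known quantity of energy transferred as work.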

CHAPTER 2. LAWS OF THERMODYNAMICS

When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process,[49] the heat transferred to the system is equal to the increase in its internal energy:

Q_{A→B}^{adynamic} = ∆U.

General case for reversible processes

Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be globally reversible. For a particular reversible process in general, the work done reversibly on the system, W_{A→B}^{path P₀, reversible}, and the heat transferred reversibly to the system, Q_{A→B}^{path P₀, reversible}, are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path, P₀, through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously.

Putting the two complementary aspects together, the first law for a particular reversible process can be written

−W_{A→B}^{path P₀, reversible} + Q_{A→B}^{path P₀, reversible} = ∆U.

This combined statement is the expression of the first law of thermodynamics for reversible processes for closed systems. In particular, if no work is done on a thermally isolated closed system we have

∆U = 0.

This is one aspect of the law of conservation of energy and can be stated: The internal energy of an isolated system remains constant.

General case for irreversible processes

If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient and practically frictionless, then the process is irreversible. Then the heat and work transfers may be difficult to calculate, and irreversible thermodynamics is called for. Nevertheless, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system, W_{A→B}^{path P₁, irreversible}, and the heat transferred irreversibly to the system, Q_{A→B}^{path P₁, irreversible}, which belong to the same particular process defined by its particular irreversible path, P₁, through the space of thermodynamic states:

−W_{A→B}^{path P₁, irreversible} + Q_{A→B}^{path P₁, irreversible} = ∆U.

This means that the internal energy U is a function of state and that the internal energy change ∆U between two states is a function only of the two states.

Overview of the weight of evidence for the law

The first law of thermodynamics is so general that its predictions cannot all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law.
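The central claim here, that Q and W each depend on the path while their difference ∆U does not, can be checked numerically for a concrete working substance. A minimal sketch, assuming one mole of a monatomic ideal gas (U = (3/2)nRT, isothermal work W = nRT ln(V₂/V₁)); the two states and the two paths are illustrative choices, not taken from the text:

```python
import math

# One mole of a monatomic ideal gas taken between the same two
# equilibrium states by two different quasi-static paths.
n, R = 1.0, 8.314
T_A, V_A = 300.0, 0.010   # state A (K, m^3), assumed values
T_B, V_B = 600.0, 0.020   # state B

Cv = 1.5 * n * R          # constant-volume heat capacity

# Path 1: isochoric heating A -> (T_B, V_A), then isothermal expansion at T_B.
Q1 = Cv * (T_B - T_A) + n * R * T_B * math.log(V_B / V_A)
W1 = 0.0                 + n * R * T_B * math.log(V_B / V_A)

# Path 2: isothermal expansion at T_A, then isochoric heating to T_B.
Q2 = n * R * T_A * math.log(V_B / V_A) + Cv * (T_B - T_A)
W2 = n * R * T_A * math.log(V_B / V_A) + 0.0

# Q and W are path-dependent, but Q - W = Delta U is not.
dU_path1 = Q1 - W1
dU_path2 = Q2 - W2
print(dU_path1, dU_path2)   # both equal Cv * (T_B - T_A)
```

Q1 and Q2 differ by nR(T_B − T_A) ln(V_B/V_A), and W1 and W2 differ by the same amount, so the difference Q − W is the same state-function change on both paths.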

2.2. FIRST LAW OF THERMODYNAMICS

2.2.6 State functional formulation for infinitesimal processes

When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by δ, rather than exact differentials denoted by d, as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters, while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process.

The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy U may then be expressed as a function of the system’s defining state variables S, entropy, and V, volume: U = U(S, V). In these terms, T, the system’s temperature, and P, its pressure, are partial derivatives of U with respect to S and V. These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system’s internal thermodynamic equilibrium.

The first law requires that:

dU = δQ − δW   (closed system, general process, quasi-static or irreversible).

Then, for the fictive case of a reversible process, dU can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system. This excludes isochoric work. Then, mechanical work is given by δW = P dV and the quantity of heat added can be expressed as δQ = T dS. For these conditions

dU = T dS − P dV   (closed system, reversible process).

While this has been shown here for reversible changes, it is valid in general, as U can be considered as a thermodynamic state function of the defining state variables S and V:

(2)   dU = T dS − P dV   (closed system, general process, quasi-static or irreversible).

Equation (2) is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are S and V, with respect to which T and P are partial derivatives of U.[50][51][52] It is only in the fictive reversible case, when isochoric work is excluded, that the work done and heat transferred are given by −P dV and T dS.

In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes:

dU = T dS − P dV + ∑_i µ_i dN_i

where dN_i is the (small) increase in amount of type-i particles in the reaction, and µ_i is known as the chemical potential of the type-i particles in the system. If dN_i is expressed in mol then µ_i is expressed in J/mol.

If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to:

dU = T dS − ∑_i X_i dx_i + ∑_j µ_j dN_j.

Here the X_i are the generalized forces corresponding to the external variables x_i. The parameters X_i are independent of the size of the system and are called intensive parameters, and the x_i are proportional to the size and are called extensive parameters.

For an open system, there can be transfers of particles as well as energy into or out of the system during a process. In this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems.

A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = −P dV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system.

It is useful to view the T dS term in the same light: here the temperature is known as a “generalized” force (rather than

an actual mechanical force) and the entropy is a generalized displacement.

Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in the process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized “force” of evaporation that drives water molecules out of the liquid. There is a generalized “force” of condensation that drives vapor molecules out of the vapor. Only when these two “forces” (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero.

The two thermodynamic parameters that form a generalized force-displacement pair are called “conjugate variables”. The two most familiar pairs are, of course, pressure-volume and temperature-entropy.

2.2.7 Spatially inhomogeneous systems

Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903[36]), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces.[53] How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if E denotes the total energy of that component system, one may write

E = E^{kin} + E^{pot} + U

where E^{kin} and E^{pot} denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and U denotes its internal energy.[25][54]

Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system.

A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction E_{12}^{pot} between the subsystems. Thus, in an obvious notation, one may write

E = E_1^{kin} + E_1^{pot} + U_1 + E_2^{kin} + E_2^{pot} + U_2 + E_{12}^{pot}.

The quantity E_{12}^{pot} in general lacks an assignment to either subsystem in a way that is not arbitrary, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments.[55]

The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy.[56] The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy,[57][58][59] whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.

2.2.8 First law of thermodynamics for open systems

For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view.[60][61] For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed.

There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.

Internal energy for an open system

Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic

work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection “cannot be reduced to mechanics”.[62] In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies.[63][64][65] The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems.[66][67][68][69][70][71] In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible.[72]

This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system.[73] The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured.[60] Then the law of conservation of energy requires that

∆U_s + ∆U_o = 0,[74][75]

where ∆U_s and ∆U_o denote the changes in internal energy of the system and of its surroundings respectively.
This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems,[76] that fits well with the conceptually revised and rigorous statement of the law stated above. For the thermodynamic operation of adding two systems with internal energies U 1 and U 2 , to produce a new system with internal energy U, one may write U = U 1 + U 2 ; the reference states for U, U 1 and U 2 should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables.[60][77] There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors.[78][79]
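The additivity U = U₁ + U₂ can be illustrated with the simplest extensive model. A toy sketch, assuming the monatomic ideal-gas form U = (3/2)nRT measured from a common reference state; this illustrates the bookkeeping only, not the general postulate, whose nontriviality the text notes:

```python
# Additivity and extensivity of internal energy for a toy model:
# monatomic ideal gas, U = (3/2) n R T, measured from a common
# reference state so that internal energies can be compared.
R = 8.314

def U_ideal(n, T):
    """Internal energy (J) of n mol of monatomic ideal gas at T (K)."""
    return 1.5 * n * R * T

# Two samples of the same gas at the same temperature are combined by
# the thermodynamic operation of removing a partition; no work or heat
# crosses the outer, isolating wall, and the temperature is unchanged.
n1, n2, T = 2.0, 3.0, 350.0
U1, U2 = U_ideal(n1, T), U_ideal(n2, T)
U_combined = U_ideal(n1 + n2, T)

print(U_combined, U1 + U2)   # additive: U = U1 + U2

# Extensivity: scaling the amount of substance scales U by the same factor.
assert abs(U_ideal(5 * n1, T) - 5 * U_ideal(n1, T)) < 1e-9
```

In this model the internal energy is proportional to the amount of substance, which is exactly the extensivity property the text says must be maintained by the choice of reference states.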

Also, of course,

∆N_s + ∆N_o = 0,[74][75]

where ∆N_s and ∆N_o denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.

Process of transfer of matter between an open system and its surroundings

A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem.

An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature.

A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection, which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system.
The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.
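The claim that the energy leaving with the vapor cannot be uniquely split into heat and work can be made concrete with a bookkeeping sketch. Using the standard relation h = u + Pv for the specific enthalpy of the leaving vapor, and illustrative numbers rather than data from the text, two conventional bookkeepings assign the same total energy differently between a 'matter' column and a 'work' column, while agreeing on ∆U of the system:

```python
# Two bookkeepings of the same evaporation step (illustrative numbers).
# A mass dm of vapor leaves the open system at pressure P, with
# specific internal energy u and specific volume v.
P = 1.0e5        # Pa (assumed value)
u = 2.5e6        # J/kg, specific internal energy of the vapor (assumed)
v = 1.7          # m^3/kg, specific volume of the vapor (assumed)
dm = 1.0e-3      # kg of vapor leaving the system
Q = 50.0         # J, heat supplied through the walls (same in both books)

# Bookkeeping A: the matter carries away internal energy u*dm, and the
# system separately does flow work P*v*dm pushing the vapor out.
dU_A = Q - P * v * dm - u * dm

# Bookkeeping B: the matter carries away enthalpy h*dm = (u + P*v)*dm,
# with no separate flow-work entry.
h = u + P * v
dU_B = Q - h * dm

print(dU_A, dU_B)   # identical: only the labels differ, not dU
```

The change of internal energy of the system, which is what the first law fixes, is the same either way; only the conventional labels "work" and "energy carried by matter" shift between the two accounts.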


Open system with multiple contacts

An open system can be in contact equilibrium with several other systems at once.[16][80][81][82][83][84][85][86] This includes cases in which there is contact equilibrium between the system and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and internal energy as heat and allowing friction of passage of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters.

Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work.[87] With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system.
Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components.[88] Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics:

(3)   ∆U₀ = Q − W − ∑_{i=1}^{m} ∆U_i   (suitably defined surrounding subsystems, general process, quasi-static or irreversible transfers of energy)

where ∆U₀ denotes the change of internal energy of the system, and ∆U_i denotes the change of internal energy of the ith of the m surrounding subsystems that are in open contact with the system, due to transfer between the system and that ith surrounding subsystem, and Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and W denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.

Combination of first and second laws

If the system is described by the energetic fundamental equation, U₀ = U₀(S, V, N_j), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula

(4)   dU₀ = T dS − P dV + ∑_{j=1}^{n} µ_j dN_j

where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, N_j, and µ_j are defined as above.[89]

For a general natural process, there is no immediate termwise correspondence between equations (3) and (4), because they describe the process in different conceptual frames.

Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely.

For the special fictive case of quasi-static transfers, there is a simple correspondence.[90] For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write

δQ = T dS   and   δW = P dV.

For fictive quasi-static transfers for which the defined chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield

(5)   dU₀ = δQ − δW + ∑_{j=1}^{n} µ_j dN_j   (suitably defined surrounding subsystems, quasi-static transfers).

The reference[90] does not actually write equation (5), but what it does write is fully compatible with it. Another helpful account is given by Tschoegl.[91] There are several other accounts of this, in apparent mutual conflict.[69][92][93]


Non-equilibrium transfers

The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined.

Methods for study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above when there is no actual transfer of matter, which can be treated as if for a closed system, in strictly defined thermodynamic terms, it follows that transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy.

A factor here is that there are often cross-effects between distinct transfers, for example that transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.

Usually transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law, that the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of “heat transfer” for discrete open systems, the author Prigogine carefully explains at some length that his definition of it does not obey a balance law. He describes this as paradoxical.[94]

The situation is clarified by Gyarmati, who shows that his definition of “heat transfer”, for continuous-flow systems, really refers not specifically to heat, but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous-flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density does not have to be constant per unit mass of material, and allowing for non-conservation of internal energy because of local conversion of kinetic energy of bulk flow to internal energy by viscosity.

Gyarmati shows that his definition of “the heat flow vector” is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes “that this relation must be considered to be the exact definition of

the concept of heat flow, fairly loosely used in experimental physics and heat technics.”[95] Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term “heat flux” in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: “Again the flow of internal energy may be split into a convection flow ρuv and a conduction flow. This conduction flow is by definition the heat flow W. Therefore: j[U] = ρuv + W where u denotes the [internal] energy per unit mass. [These authors actually use the symbols E and e to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol U to refer to total energy, including kinetic energy of bulk flow.]"[96] This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vásquez,[97] and de Groot and Mazur.[98] This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics.[70] This usage is also followed by workers in the kinetic theory of gases.[99][100][101] This is not the ad hoc definition of “reduced heat flux” of Haase.[102] In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. 
But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.[103]
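The Glansdorff–Prigogine split quoted above, j[U] = ρuv + W, is simple arithmetic and can be sketched numerically; all quantities below are illustrative values, not data from the cited texts:

```python
# Split of the internal-energy flow into a convection part (rho * u * v)
# and a conduction part W, the latter being by definition the heat flow,
# following the relation j[U] = rho*u*v + W quoted in the text.
# All numerical values are illustrative only.

rho = 1.25     # mass density, kg/m^3
u = 2.0e5      # internal energy per unit mass, J/kg
v = 0.5        # bulk (barycentric) flow speed, m/s
W = 150.0      # conductive (heat) flux, W/m^2

convective = rho * u * v   # convection flow of internal energy, W/m^2
j_U = convective + W       # total internal-energy flux, W/m^2

print(convective)  # 125000.0
print(j_U)         # 125150.0
```

For a single-constituent system, as the text notes, the conductive part W is the only non-convective contribution, and is what is safely called heat.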

2.2.9 See also

• Laws of thermodynamics
• Perpetual motion
• Microstate (statistical mechanics) – includes microscopic definitions of internal energy, heat and work
• Entropy production
• Relativistic heat conduction

2.2.10 References

[1] Truesdell, C. A. (1980).
[2] Hess, H. (1840). “Thermochemische Untersuchungen”. Annalen der Physik und Chemie 126 (6): 385–404. Bibcode:1840AnP...126..385H. doi:10.1002/andp.18401260620.
[3] Truesdell, C. A. (1980), pp. 157–158.
[4] Mayer, Robert (1841). Paper: 'Remarks on the Forces of Nature'; as quoted in: Lehninger, A. (1971). Bioenergetics – the Molecular Basis of Biological Energy Transformations, 2nd ed., London: The Benjamin/Cummings Publishing Company.
[5] Bailyn, M. (1994), p. 79.
[6] Clausius, R. (1850), p. 373; translation here taken from Truesdell, C. A. (1980), pp. 188–189.
[7] Clausius, R. (1850), p. 384, equation (IIa.).
[8] Bailyn, M. (1994), p. 80.
[9] Bryan, G. H. (1907), p. 47. Also Bryan had written about this in the Enzyklopädie der Mathematischen Wissenschaften, volume 3, p. 81. Also in 1906 Jean Baptiste Perrin wrote about it in Bull. de la société française de philosophie, volume 6, p. 81.
[10] Born, M. (1949), Lecture V, pp. 31–45.
[11] Bailyn, M. (1994), pp. 65, 79.
[12] Bailyn, M. (1994), p. 82.
[13] Helmholtz, H. (1847).
[14] Pippard, A. B. (1957/1966), p. 15. According to Herbert Callen, in his most widely cited text, Pippard’s text gives a “scholarly and rigorous treatment"; see Callen, H. B. (1960/1985), p. 485. It is also recommended by Münster, A. (1970), p. 376.
[15] Born, M. (1921). “Kritische Betrachtungen zur traditionellen Darstellung der Thermodynamik”. Physik. Zeitschr. 22: 218–224.
[16] Carathéodory, C. (1909).
[17] Münster, A. (1970), pp. 23–24.
[18] Reif, F. (1965), p. 122.
[19] Haase, R. (1971), pp. 24–25.
[20] Quantities, Units and Symbols in Physical Chemistry (IUPAC Green Book), Sec. 2.11, Chemical Thermodynamics.
[21] Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London, p. 43.
[22] Münster, A. (1970).
[23] Kirkwood, J. G., Oppenheim, I. (1961), pp. 31–33.
[24] Planck, M. (1897/1903), p. 86.
[25] Crawford, F. H. (1963), pp. 106–107.
[26] Bryan, G. H. (1907), p. 47.
[27] Buchdahl, H. A. (1966), p. 34.
[28] Pippard, A. B. (1957/1966), p. 14.
[29] Reif, F. (1965), p. 82.
[30] Adkins, C. J. (1968/1983), p. 31.
[31] Callen, H. B. (1960/1985), pp. 13, 17.
[32] Kittel, C., Kroemer, H. (1980). Thermal Physics, (first edition by Kittel alone 1969), second edition, W. H. Freeman, San Francisco, ISBN 0-7167-1088-9, pp. 49, 227.
[33] Tro, N. J. (2008). Chemistry. A Molecular Approach, Pearson/Prentice Hall, Upper Saddle River NJ, ISBN 0-13-100065-9, p. 246.
[34] Kirkwood, J. G., Oppenheim, I. (1961), pp. 17–18. Kirkwood & Oppenheim 1961 is recommended by Münster, A. (1970), p. 376. It is also cited by Eu, B. C. (2002), Generalized Thermodynamics, the Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4, pp. 18, 29, 66.
[35] Guggenheim, E. A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, (first edition 1949), fifth edition 1967, North-Holland, Amsterdam, pp. 9–10. Guggenheim 1949/1965 is recommended by Buchdahl, H. A. (1966), p. 218. It is also recommended by Münster, A. (1970), p. 376.
[36] Planck, M. (1897/1903).
[37] Kestin, J. (1966), p. 156.
[38] Cropper, W. H. (1986). “Rudolf Clausius and the road to entropy”. Am. J. Phys. 54: 1068–1074. Bibcode:1986AmJPh..54.1068C. doi:10.1119/1.14740.
[39] Truesdell, C. A. (1980), pp. 161–162.
[40] Buchdahl, H. A. (1966), p. 43.
[41] Maxwell, J. C. (1871). Theory of Heat, Longmans, Green, and Co., London, p. 150.
[42] Planck, M. (1897/1903), Section 71, p. 52.
[43] Bailyn, M. (1994), p. 95.
[44] Adkins, C. J. (1968/1983), p. 35.
[45] Atkins, P., de Paula, J. (1978/2010). Physical Chemistry, (first edition 1978), ninth edition 2010, Oxford University Press, Oxford UK, ISBN 978-0-19-954337-3, p. 54.
[46] Kondepudi, D. (2008). Introduction to Modern Thermodynamics, Wiley, Chichester, ISBN 978-0-470-01598-8, p. 63.
[47] Gislason, E. A.; Craig, N. C. (2005). “Cementing the foundations of thermodynamics: comparison of system-based and surroundings-based definitions of work and heat”. J. Chem. Thermodynamics 37: 954–966. doi:10.1016/j.jct.2004.12.012.
[48] Rosenberg, R. M. (2010). “From Joule to Caratheodory and Born: A conceptual evolution of the first law of thermodynamics”. J. Chem. Educ. 87: 691–693. Bibcode:2010JChEd..87..691R. doi:10.1021/ed1001976.
[49] Partington, J. R. (1949), p. 183: "Rankine calls the curves representing changes without performance of work, adynamics.”
[50] Denbigh, K. (1954/1981), p. 45.
[51] Adkins, C. J. (1968/1983), p. 75.
[52] Callen, H. B. (1960/1985), pp. 36, 41, 63.
[53] Bailyn, M. (1994), pp. 254–256.
[54] Glansdorff, P., Prigogine, I. (1971), p. 8.
[55] Tisza, L. (1966), p. 91.
[56] Denbigh, K. G. (1951), p. 50.
[57] Thomson, W. (1852 a). "On a Universal Tendency in Nature to the Dissipation of Mechanical Energy", Proceedings of the Royal Society of Edinburgh for April 19, 1852. [This version from Mathematical and Physical Papers, vol. i, art. 59, p. 511.]
[58] Thomson, W. (1852 b). On a universal tendency in nature to the dissipation of mechanical energy, Philosophical Magazine 4: 304–306.
[59] Helmholtz, H. (1869/1871). Zur Theorie der stationären Ströme in reibenden Flüssigkeiten, Verhandlungen des naturhistorisch-medizinischen Vereins zu Heidelberg, Band V: 1–7. Reprinted in Helmholtz, H. (1882), Wissenschaftliche Abhandlungen, volume 1, Johann Ambrosius Barth, Leipzig, pp. 223–230.
[60] Münster, A. (1970), Sections 14, 15, pp. 45–51.
[61] Landsberg, P. T. (1978), p. 78.
[62] Born, M. (1949), p. 44.
[63] Denbigh, K. G. (1951), p. 56. Denbigh states in a footnote that he is indebted to correspondence with E. A. Guggenheim and with N. K. Adam. From this, Denbigh concludes “It seems, however, that when a system is able to exchange both heat and matter with its environment, it is impossible to make an unambiguous distinction between energy transported as heat and by the migration of matter, without already assuming the existence of the 'heat of transport'.”
[64] Fitts, D. D. (1962), p. 28.
[65] Denbigh, K. (1954/1971), pp. 81–82.
[66] Münster, A. (1970), p. 50.
[67] Haase, R. (1963/1969), p. 15.
[68] Haase, R. (1971), p. 20.
[69] Smith, D. A. (1980). Definition of heat in open systems, Aust. J. Phys. 33: 95–105.
[70] Bailyn, M. (1994), p. 308.
[71] Balian, R. (1991/2007), p. 217.
[72] Münster, A. (1970), p. 46.
[73] Tisza, L. (1966), p. 41.
[74] Callen, H. B. (1960/1985), p. 54.
[75] Tisza, L. (1966), p. 110.
[76] Tisza, L. (1966), p. 111.
[77] Prigogine, I. (1955/1967), p. 12.
[78] Landsberg, P. T. (1961), pp. 142, 387.
[79] Landsberg, P. T. (1978), pp. 79, 102.
[80] Prigogine, I. (1947), p. 48.
[81] Born, M. (1949), Appendix 8, pp. 146–149.
[82] Aston, J. G., Fritz, J. J. (1959), Chapter 9.
[83] Kestin, J. (1961).
[84] Landsberg, P. T. (1961), pp. 128–142.
[85] Tisza, L. (1966), p. 108.
[86] Tschoegl, N. W. (2000), p. 201.
[87] Born, M. (1949), pp. 146–147.
[88] Haase, R. (1971), p. 35.
[89] Callen, H. B. (1960/1985), p. 35.
[90] Aston, J. G., Fritz, J. J. (1959), Chapter 9. This is an unusually explicit account of some of the physical meaning of the Gibbs formalism.
[91] Tschoegl, N. W. (2000), pp. 12–14.
[92] Buchdahl, H. A. (1966), Section 66, pp. 121–125.
[93] Callen, H. B. (1960/1985), Section 2-1, pp. 35–37.
[94] Prigogine, I. (1947), pp. 48–49.
[95] Gyarmati, I. (1970), p. 68.
[96] Glansdorff, P., Prigogine, I. (1971), p. 9.
[97] Lebon, G., Jou, D., Casas-Vázquez, J. (2008), p. 45.
[98] de Groot, S. R., Mazur, P. (1962), p. 18.
[99] de Groot, S. R., Mazur, P. (1962), p. 169.
[100] Truesdell, C., Muncaster, R. G. (1980), p. 3.
[101] Balescu, R. (1997), p. 9.
[102] Haase, R. (1963/1969), p. 18.
[103] Eckart, C. (1940).

Cited sources

• Adkins, C. J. (1968/1983). Equilibrium Thermodynamics, (first edition 1968), third edition 1983, Cambridge University Press, ISBN 0-521-25445-0.
• Aston, J. G., Fritz, J. J. (1959). Thermodynamics and Statistical Thermodynamics, John Wiley & Sons, New York.
• Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.
• Balescu, R. (1997). Statistical Dynamics; Matter out of Equilibrium, Imperial College Press, London, ISBN 978-1-86094-045-3.
• Balian, R. (1991/2007). From Microphysics to Macrophysics: Methods and Applications of Statistical Physics, volume 1, translated by D. ter Haar, J. F. Gregg, Springer, Berlin, ISBN 978-3-540-45469-4.
• Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
• Bryan, G. H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B. G. Teubner, Leipzig.
• Buchdahl, H. A. (1966). The Concepts of Classical Thermodynamics, Cambridge University Press, London.
• Callen, H. B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, ISBN 0-471-86256-8.
• Carathéodory, C. (1909). “Untersuchungen über die Grundlagen der Thermodynamik”. Mathematische Annalen 67: 355–386. doi:10.1007/BF01450409. A mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
• Clausius, R. (1850). Part I, Part II (http://gallica.bnf.fr/ark:/12148/bpt6k15164w/f518.table), Annalen der Physik 79: 368–397, 500–524. Bibcode:1850AnP...155..500C. doi:10.1002/andp.18501550403. See English translation: On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom, Phil. Mag. (1851), series 4, 2, 1–21, 102–119. Also available on Google Books.
• Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
• de Groot, S. R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, ISBN 0486647412.
• Denbigh, K. G. (1951). The Thermodynamics of the Steady State, Methuen, London, Wiley, New York.
• Denbigh, K. (1954/1981). The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering, fourth edition, Cambridge University Press, Cambridge UK, ISBN 0-521-23682-7.
• Eckart, C. (1940). The thermodynamics of irreversible processes. I. The simple fluid, Phys. Rev. 58: 267–269.
• Fitts, D. D. (1962). Nonequilibrium Thermodynamics. Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York.
• Glansdorff, P., Prigogine, I. (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley, London, ISBN 0-471-30280-5.
• Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the 1967 Hungarian by E. Gyarmati and W. F. Heinz, Springer-Verlag, New York.
• Haase, R. (1963/1969). Thermodynamics of Irreversible Processes, English translation, Addison-Wesley Publishing, Reading MA.
• Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
• Helmholtz, H. (1847). Ueber die Erhaltung der Kraft. Eine physikalische Abhandlung, G. Reimer (publisher), Berlin, read on 23 July in a session of the Physikalischen Gesellschaft zu Berlin. Reprinted in Helmholtz, H. von (1882), Wissenschaftliche Abhandlungen, Band 1, J. A. Barth, Leipzig. Translated and edited by J. Tyndall, in Scientific Memoirs, Selected from the Transactions of Foreign Academies of Science and from Foreign Journals. Natural Philosophy (1853), volume 7, edited by J. Tyndall, W. Francis, published by Taylor and Francis, London, pp. 114–162, reprinted as volume 7 of Series 7, The Sources of Science, edited by H. Woolf, (1966), Johnson Reprint Corporation, New York, and again in Brush, S. G., The Kinetic Theory of Gases. An Anthology of Classic Papers with Historical Commentary, volume 1 of History of Modern Physical Sciences, edited by N. S. Hall, Imperial College Press, London, ISBN 1-86094-347-0, pp. 89–110.
• Kestin, J. (1961). “On intersecting isentropics”. Am. J. Phys. 29: 329–331. Bibcode:1961AmJPh..29..329K. doi:10.1119/1.1937763.
• Kestin, J. (1966). A Course in Thermodynamics, Blaisdell Publishing Company, Waltham MA.
• Kirkwood, J. G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York.
• Landsberg, P. T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York.
• Landsberg, P. T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, ISBN 0-19-851142-6.
• Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, Berlin, ISBN 978-3-540-74251-7.
• Münster, A. (1970). Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6.
• Partington, J. R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London.
• Pippard, A. B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK.
• Planck, M. (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London.
• Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège.
• Prigogine, I. (1955/1967). Introduction to Thermodynamics of Irreversible Processes, third edition, Interscience Publishers, New York.
• Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York.
• Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
• Truesdell, C. A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, ISBN 0-387-90403-4.
• Truesdell, C. A., Muncaster, R. G. (1980). Fundamentals of Maxwell’s Kinetic Theory of a Simple Monatomic Gas, Treated as a Branch of Rational Mechanics, Academic Press, New York, ISBN 0-12-701350-4.
• Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.

2.2.11 Further reading

• Goldstein, Martin, and Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 0-674-75325-9. OCLC 32826343. Chapters 2 and 3 contain a nontechnical treatment of the first law.
• Çengel, Y. A., Boles, M. (2007). Thermodynamics: an Engineering Approach. McGraw-Hill Higher Education. ISBN 0-07-125771-3. Chapter 2.
• Atkins, P. (2007). Four Laws that Drive the Universe. OUP Oxford. ISBN 0-19-923236-9.

2.2.12 External links

• MISN-0-158, The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET.
• First law of thermodynamics in the MIT course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky.

2.3 Second law of thermodynamics

The second law of thermodynamics states that for a thermodynamically defined process to actually occur, the sum of the entropies of the participating bodies must increase. In an idealized limiting case, that of a reversible process, this sum remains unchanged. A simplified version of the law states that heat flows from a hotter to a colder body.

A thermodynamically defined process consists of transfers of matter and energy between bodies of matter and radiation, each participating body being initially in its own state of internal thermodynamic equilibrium. The bodies are initially separated from one another by walls that obstruct the passage of matter and energy between them. The transfers are initiated by a thermodynamic operation: some external agency intervenes[1] to make one or more of the walls less obstructive.[2] This establishes new equilibrium states in the bodies. If, instead of making the walls less obstructive, the thermodynamic operation makes them more obstructive, no transfers are occasioned, and there is no effect on an established thermodynamic equilibrium.

The law expresses the irreversibility of the process. The transfers invariably bring about spread,[3][4][5] dispersal, or dissipation[6] of matter or energy, or both, amongst the bodies. They occur because more kinds of transfer through the walls have become possible.[7] Irreversibility in thermodynamic processes is a consequence of the asymmetric character of thermodynamic operations, and not of any internally irreversible microscopic properties of the bodies.

The second law is an empirical finding that has been accepted as an axiom of thermodynamic theory. Even when its presuppositions are only approximately fulfilled, the law can often give a very useful approximation to the observed facts. Statistical thermodynamics, classical or quantum, explains the microscopic origin of the law.

The second law has been expressed in many ways. Its first formulation is credited to the French scientist Sadi Carnot in 1824 (see Timeline of thermodynamics). Carnot showed that there is an upper limit to the efficiency of conversion of heat to work in a cyclic heat engine operating between two given temperatures.

2.3.1 Introduction

Intuitive meaning of the law

The second law is about thermodynamic systems or bodies of matter and radiation, initially each in its own state of internal thermodynamic equilibrium, and separated from one another by walls that partly or wholly allow or prevent the passage of matter and energy between them, or make them mutually inaccessible for their constituents.[8][9][10][11][12][13]

The law envisages that the walls are changed by some external agency, making them less restrictive or constraining and more permeable in various ways, and increasing the accessibility, to parts of the overall system, of matter and energy.[14][15][16][17] Thereby a process is defined, establishing new equilibrium states.

The process invariably spreads,[18][19][20][21] disperses,[22] and dissipates[6][23] matter or energy, or both, amongst the bodies. Some energy, inside or outside the system, is degraded in its ability to do work.[24] This is quantitatively described by increase of entropy. It is the consequence of decrease of constraint by a wall, with a corresponding increase in the accessibility, to the parts of the system, of matter and energy. An increase of constraint by a wall has no effect on an established thermodynamic equilibrium.

For an example of the spreading of matter due to increase of accessibility, one may consider a gas initially confined by an impermeable wall to one of two compartments of an isolated system. The wall is then removed. The gas spreads throughout both compartments.[17] The sum of the entropies of the two compartments increases. Reinsertion of the impermeable wall does not change the spread of the gas between the compartments.

For an example of the spreading of energy due to increase of accessibility, one may consider a wall, impermeable to matter and energy, initially separating two otherwise isolated bodies at different temperatures. A thermodynamic operation makes the wall permeable only to heat, which then passes from the hotter to the colder body, until their temperatures become equal. The sum of the entropies of the two bodies increases. Restoration of the complete impermeability of the wall does not change the equality of the temperatures.

The spreading is a change from heterogeneity towards homogeneity. It is the unconstraining of the initial equilibrium that causes the increase of entropy and the change towards homogeneity.[15]

The following reasoning offers intuitive understanding of this fact. One may imagine that the freshly unconstrained system, still relatively heterogeneous, immediately after the intervention that increased the wall permeability, in its transient condition, arose by spontaneous evolution from an unconstrained previous transient condition of the system. One can then ask what the probable such imagined previous condition is. The answer is that, overwhelmingly probably, it is just the very same kind of homogeneous condition as that to which the relatively heterogeneous condition will overwhelmingly probably evolve. Obviously, this is possible only in the imagined absence of the constraint that was actually present until its removal. In this light, the reversibility of the dynamics of the evolution of the unconstrained system is evident, in accord with the ordinary laws of microscopic dynamics. It is the removal of the constraint that is effective in causing the change towards homogeneity, not some imagined or apparent “irreversibility” of the laws of spontaneous evolution.[25] This reasoning is of intuitive interest, but is essentially about microstates, and therefore does not belong to macroscopic equilibrium thermodynamics, which studiously ignores consideration of microstates and non-equilibrium considerations of this kind. It does, however, forestall futile puzzling about some famous proposed “paradoxes”, imagining of a “derivation” of an “arrow of time” from the second law,[26] and meaningless speculation about an imagined “low entropy state” of the early universe.[27]
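The gas-spreading example above admits a simple quantitative check: since entropy is a state function, the entropy change of an irreversible free expansion can be evaluated along a fictive reversible isothermal path, giving ΔS = nR ln(V₂/V₁) > 0. A minimal sketch (the amounts and volumes are illustrative):

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def entropy_of_free_expansion(n, V1, V2):
    """Entropy change for n moles of ideal gas spreading freely from V1 to V2.

    Free expansion is irreversible, but entropy is a state function, so
    Delta S may be evaluated along a fictive reversible isothermal path,
    where dS = nR dV/V integrates to nR ln(V2/V1).
    """
    return n * R * math.log(V2 / V1)

# One mole spreading into a second, equal compartment: Delta S = R ln 2 > 0.
dS = entropy_of_free_expansion(1.0, 1.0, 2.0)
print(round(dS, 3))  # 5.763
```

Reinserting the wall changes nothing: the gas stays spread, and ΔS, once gained, is not undone.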

Though it is more or less intuitive to imagine 'spreading', such loose intuition is, for many thermodynamic processes, too vague or imprecise to be usefully quantitatively informative, because competing possibilities of spreading can coexist, for example due to an increase of some constraint combined with decrease of another. The second law justifies the concept of entropy, which makes the notion of 'spreading' suitably precise, allowing quantitative predictions of just how spreading will occur in particular circumstances. It is characteristic of the physical quantity entropy that it refers to states of thermodynamic equilibrium.[28][29][30]

General significance of the law

The first law of thermodynamics provides the basic definition of thermodynamic energy, also called internal energy, associated with all thermodynamic systems, but unknown in classical mechanics, and states the rule of conservation of energy in nature.[31][32]

The concept of energy in the first law does not, however, account for the observation that natural processes have a preferred direction of progress. The first law is symmetrical with respect to the initial and final states of an evolving system. But the second law asserts that a natural process runs only in one sense, and is not reversible. For example, heat always flows spontaneously from hotter to colder bodies, and never the reverse, unless external work is performed on the system. The key concept for the explanation of this phenomenon through the second law of thermodynamics is the definition of a new physical quantity, the entropy.[33][34]

For mathematical analysis of processes, entropy is introduced as follows. In a fictive reversible process, an infinitesimal increment in the entropy (dS) of a system results from an infinitesimal transfer of heat (δQ) to a closed system, divided by the common temperature (T) of the system and the surroundings which supply the heat:[35]

dS = δQ/T    (closed system; idealized, fictive, reversible process).

For an actually possible infinitesimal process without exchange of matter with the surroundings, the second law requires that the increment in system entropy be greater than that:

dS > δQ/T    (closed system; actually possible, irreversible process).

This is because a general process for this case may include work being done on the system by its surroundings, which must have frictional or viscous effects inside the system, and because heat transfer actually occurs only irreversibly, driven by a finite temperature difference.[36][37]

The zeroth law of thermodynamics in its usual short statement allows recognition that two bodies in a relation of thermal equilibrium have the same temperature, especially that a test body has the same temperature as a reference thermometric body.[38] For a body in thermal equilibrium with another, there are indefinitely many empirical temperature scales, in general respectively depending on the properties of a particular reference thermometric body. The second law allows a distinguished temperature scale, which defines an absolute, thermodynamic temperature, independent of the properties of any particular reference thermometric body.[39][40]

2.3.2 Various statements of the law

The second law of thermodynamics may be expressed in many specific ways,[41] the most prominent classical statements[42] being the statement by Rudolf Clausius (1854), the statement by Lord Kelvin (1851), and the statement in axiomatic thermodynamics by Constantin Carathéodory (1909). These statements cast the law in general physical terms, citing the impossibility of certain processes. The Clausius and the Kelvin statements have been shown to be equivalent.[43]

Carnot’s principle

The historical origin of the second law of thermodynamics was in Carnot’s principle. It refers to a cycle of a Carnot heat engine, fictively operated in the limiting mode of extreme slowness known as quasi-static, so that the heat and work transfers are between subsystems that are always in their own internal states of thermodynamic equilibrium. The Carnot engine is an idealized device of special interest to engineers who are concerned with the efficiency of heat engines. Carnot’s principle was recognized by Carnot at a time when the caloric theory of heat was seriously considered, before the recognition of the first law of thermodynamics, and before the mathematical expression of the concept of entropy. Interpreted in the light of the first law, it is physically equivalent to the second law of thermodynamics, and remains valid today. It states:

The efficiency of a quasi-static or reversible Carnot cycle depends only on the temperatures of the two heat reservoirs, and is the same, whatever the working substance. A Carnot engine operated in this way is the most efficient possible heat engine using those two temperatures.[44][45][46][47][48][49][50]

Clausius statement

The German scientist Rudolf Clausius laid the foundation for the second law of thermodynamics in 1850 by examining the relation between heat transfer and work.[51] His formulation of the second law, which was published in German in 1854, is known as the Clausius statement:

Heat can never pass from a colder to a warmer body without some other change, connected therewith, occurring at the same time.[52]

The statement by Clausius uses the concept of 'passage of heat'. As is usual in thermodynamic discussions, this means 'net transfer of energy as heat', and does not refer to contributory transfers one way and the other. Heat cannot spontaneously flow from cold regions to hot regions without external work being performed on the system, as is evident from ordinary experience of refrigeration. In a refrigerator, heat flows from cold to hot, but only when forced by an external agent, the refrigeration system.

Kelvin statement

Lord Kelvin expressed the second law as:

It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the coldest of the surrounding objects.[53]

Equivalence of the Clausius and the Kelvin statements

(Figure: Derive Kelvin Statement from Clausius Statement — an imagined Kelvin-violating engine paired with a reversed Carnot engine.)

Suppose there is an engine violating the Kelvin statement: i.e., one that drains heat and converts it completely into work in a cyclic fashion without any other result. Now pair it with a reversed Carnot engine as shown by the figure. The net and sole effect of this newly created engine consisting of the two engines mentioned is transferring heat ΔQ = Q(1/η − 1) from the cooler reservoir to the hotter one, which violates the Clausius statement. Thus a violation of the Kelvin statement implies a violation of the Clausius statement, i.e. the Clausius statement implies the Kelvin statement. We can prove in a similar manner that the Kelvin statement implies the Clausius statement, and hence the two are equivalent.

Planck’s proposition

Planck offered the following proposition as derived directly from experience. This is sometimes regarded as his statement of the second law, but he regarded it as a starting point for the derivation of the second law.

It is impossible to construct an engine which will work in a complete cycle, and produce no effect except the raising of a weight and cooling of a heat reservoir.[54][55]
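The heat bookkeeping in the equivalence argument above can be made concrete. If the hypothetical Kelvin-violating engine converts heat Q wholly into work, and that work drives a reversed Carnot engine of efficiency η, the composite device moves ΔQ = Q(1/η − 1) from the cold reservoir to the hot one. A short sketch with illustrative numbers:

```python
def net_heat_moved_cold_to_hot(Q, eta):
    """Heat moved from the cold to the hot reservoir per cycle when a
    hypothetical Kelvin-violating engine is paired with a reversed Carnot
    engine of efficiency eta (0 < eta < 1).

    The violator converts heat Q wholly into work W = Q. Fed that work, the
    reversed Carnot engine delivers Q/eta to the hot reservoir, drawing
    Q/eta - Q = Q*(1/eta - 1) from the cold one. The pair's net effect is
    just that cold-to-hot transfer, violating the Clausius statement.
    """
    return Q * (1.0 / eta - 1.0)

# With Q = 100 J per cycle and eta = 0.5, the composite device would move
# 100 J per cycle from cold to hot with no other effect.
print(net_heat_moved_cold_to_hot(100.0, 0.5))  # 100.0
```

Note that ΔQ > 0 for any η < 1, so the contradiction with the Clausius statement does not depend on the particular Carnot efficiency chosen.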

2.3. SECOND LAW OF THERMODYNAMICS

63

Relation between Kelvin’s statement and Planck’s Though it is almost customary in textbooks to say that proposition Carathéodory’s principle expresses the second law and to treat it as equivalent to the Clausius or to the KelvinIt is almost customary in textbooks to speak of the “Kelvin- Planck statements, such is not the case. To get all the Planck statement” of the law, as for example in the text by content of the second law, Carathéodory’s principle needs ter Haar and Wergeland.[56] One text gives a statement very to be supplemented by Planck’s principle, that isochoric like Planck’s proposition, but attributes it to Kelvin with- work always increases the internal energy of a closed sysout mention of Planck.[57] One monograph quotes Planck’s tem that was initially in its own internal thermodynamic proposition as the “Kelvin-Planck” formulation, the text equilibrium.[37][66][67][68] naming Kelvin as its author, though it correctly cites Planck in its references.[58] The reader may compare the two statements quoted just above here. Planck’s Principle Planck’s statement

In 1926, Max Planck wrote an important paper on the basics of thermodynamics.[67][69] He indicated the principle

Planck stated the second law as follows. Every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, i.e. for reversible processes, the sum of the entropies remains unchanged.[59][60][61]

The internal energy of a closed system is increased by an adiabatic process, throughout the duration of which, the volume of the system remains constant.[37][66]

This formulation does not mention heat and does not mention temperature, nor even entropy, and does not necessarily rely, even implicitly, on those concepts, but it implies the content of the second law. A closely related statement is that "Frictional pressure never does positive work."[70] Using a now-obsolete form of words, Planck himself wrote: "The production of heat by friction is irreversible."[71][72]

Not mentioning entropy, this principle of Planck is stated in physical terms. It is very closely related to the Kelvin statement given just above.[73] It is relevant that for a system at constant volume and mole numbers, the entropy is a monotonic function of the internal energy. Nevertheless, this principle of Planck is not actually Planck's preferred statement of the second law, which is quoted above in a previous sub-section of the present section of this article, and which relies on the concept of entropy.

Rather like Planck's statement is that of Uhlenbeck and Ford for irreversible phenomena:

... in an irreversible or spontaneous change from one equilibrium state to another (as for example the equalization of temperature of two bodies A and B, when brought in contact) the entropy always increases.[62]

Principle of Carathéodory

Constantin Carathéodory formulated thermodynamics on a purely mathematical axiomatic foundation. His statement of the second law is known as the Principle of Carathéodory, which may be formulated as follows:[63]

In every neighborhood of any state S of an adiabatically enclosed system there are states inaccessible from S.[64]

With this formulation, he described the concept of adiabatic accessibility for the first time and provided the foundation for a new subfield of classical thermodynamics, often called geometrical thermodynamics. It follows from Carathéodory's principle that the quantity of energy quasi-statically transferred as heat is a holonomic process function, in other words, δQ = T dS.[65]

A statement that in a sense is complementary to Planck's principle is made by Borgnakke and Sonntag. They do not offer it as a full statement of the second law:

... there is only one way in which the entropy of a [closed] system can be decreased, and that is to transfer heat from the system.[74]

Differing from Planck's just foregoing principle, this one is explicitly in terms of entropy change. Of course, removal of matter from a system can also decrease its entropy.
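The holonomy expressed by δQ = T dS can be checked numerically for a reversible ideal-gas cycle: the heat summed around a closed loop is nonzero (heat is path dependent), while the entropy changes sum to zero (dS is exact). A sketch, assuming 1 mol of a monatomic ideal gas and arbitrarily chosen cycle corners:

```python
import math

R = 8.314      # molar gas constant, J/(mol K)
CV = 1.5 * R   # monatomic ideal gas, constant-volume molar heat capacity

def isothermal(T, V1, V2):
    """Reversible isothermal leg: dU = 0, so Q = W = RT ln(V2/V1) per mole."""
    Q = R * T * math.log(V2 / V1)
    dS = R * math.log(V2 / V1)         # equals Q / T on this leg
    return Q, dS

def isochoric(T1, T2):
    """Reversible constant-volume leg: Q = CV (T2 - T1)."""
    Q = CV * (T2 - T1)
    dS = CV * math.log(T2 / T1)        # integral of CV dT / T
    return Q, dS

# Rectangular cycle in the (V, T) plane for 1 mol of gas
legs = [isothermal(300.0, 1.0, 2.0),   # expand at 300 K
        isochoric(300.0, 400.0),       # heat at constant volume
        isothermal(400.0, 2.0, 1.0),   # compress at 400 K
        isochoric(400.0, 300.0)]       # cool at constant volume

total_Q = sum(Q for Q, _ in legs)
total_S = sum(dS for _, dS in legs)
print(f"net heat around cycle:    {total_Q:+.1f} J   (path dependent, nonzero)")
print(f"net entropy around cycle: {total_S:+.2e} J/K (exact differential, zero)")
```

The nonzero net heat is the work extracted by the cycle, while ∮ dS = 0 holds to floating-point precision, consistent with 1/T being the integrating factor that turns δQ into the exact differential dS.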


CHAPTER 2. LAWS OF THERMODYNAMICS

Statement for a system that has a known expression of its internal energy as a function of its extensive state variables

The second law has been shown to be equivalent to the internal energy U being a weakly convex function, when written as a function of extensive properties (mass, volume, entropy, ...).[75][76]

Clausius Inequality

The Clausius theorem (1854) states that in a cyclic process

∮ δQ/T ≤ 0.

The equality holds in the reversible case[77] and the strict inequality holds in the irreversible case.

2.4.4 Consequences of the third law

The entropy difference between a state at temperature T and the state at absolute zero is S(T) − S(0) = ∫₀^T C(T′)/T′ dT′. For this integral to converge, the heat capacity C(T) must vanish as T → 0; if it has the form of a power law C(T) = C₀T^α, convergence requires α > 0. So the heat capacity must go to zero at absolute zero if it has the form of a power law. The same argument shows that it cannot be bounded below by a positive constant, even if we drop the power-law assumption. On the other hand, the molar specific heat at constant volume of a monatomic classical ideal gas, such as helium at room temperature, is given by CV = (3/2)R, with R the molar ideal gas constant. But clearly a constant heat capacity does not satisfy this condition. That is, a gas with a constant heat capacity all the way to absolute zero violates the third law of thermodynamics. We can verify this more fundamentally by substituting the constant CV into the entropy integral, which yields S(T) − S(T₀) = (3/2)R ln(T/T₀). In the limit T₀ → 0 this expression diverges, again contradicting the third law of thermodynamics.
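The convergence argument above can be made concrete by evaluating ΔS = ∫ C(T)/T dT numerically as the lower limit approaches zero: a constant heat capacity makes the integral diverge logarithmically, while a power law with α > 0 keeps it finite. A sketch with hypothetical parameters (the integral is taken on a logarithmic grid so the T → 0 end is handled accurately):

```python
import math

def entropy_change(C, T_low, T_high, n=20000):
    """Integrate C(T)/T dT by substituting u = ln T, so the integrand becomes
    C(e^u) du; a trapezoid rule on the uniform u-grid handles small T well."""
    u0, u1 = math.log(T_low), math.log(T_high)
    h = (u1 - u0) / n
    total = 0.0
    for i in range(n):
        a, b = u0 + i * h, u0 + (i + 1) * h
        total += 0.5 * (C(math.exp(a)) + C(math.exp(b))) * h
    return total

R = 8.314
const_CV = lambda T: 1.5 * R        # classical monatomic ideal gas (violates third law)
power_CV = lambda T: 1.5 * R * T    # power law with alpha = 1 (Fermi-gas-like)

for T0 in (1e-2, 1e-4, 1e-6):
    dS_const = entropy_change(const_CV, T0, 1.0)
    dS_power = entropy_change(power_CV, T0, 1.0)
    print(f"T0 = {T0:.0e}:  constant CV -> {dS_const:7.2f} J/K (grows without bound), "
          f"alpha=1 -> {dS_power:.3f} J/K (stays finite)")
```

Each hundredfold decrease of T0 adds the same fixed amount to the constant-CV result, the numerical signature of the (3/2)R ln(T/T₀) divergence, while the power-law result converges to (3/2)R.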

Fig. 1. Left: absolute zero can be reached in a finite number of steps if S(0,X1) ≠ S(0,X2). Right: an infinite number of steps is needed, since S(0,X1) = S(0,X2).

Absolute zero

The third law is equivalent to the statement that "It is impossible by any procedure, no matter how idealized, to reduce the temperature of any system to zero temperature in a finite number of finite operations".[7]
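The role played by the entropy difference at T = 0 can be mimicked with a toy entropy model S(T, X) = S0(X) + a(X)·T, a hypothetical linear form chosen only for illustration. If S0 is the same for both parameter values (the third law), each cooling step only shrinks T by a constant factor and absolute zero is never reached; a spurious offset S0(X2) < S0(X1) would reach T = 0 in finitely many steps:

```python
def cool(a1, a2, s0_2, T_start, n_steps):
    """Alternate: (1) isothermal switch X1 -> X2, (2) isentropic switch X2 -> X1.
    Toy entropy model: S(T, X1) = a1*T and S(T, X2) = s0_2 + a2*T.
    The isentropic leg solves S(T', X1) = S(T, X2) for the new temperature T'."""
    T = T_start
    temps = [T]
    for _ in range(n_steps):
        T = max(0.0, (s0_2 + a2 * T) / a1)   # isentropic demagnetization step
        temps.append(T)
        if T == 0.0:
            break
    return temps

# Third law holds: S(0, X1) = S(0, X2) = 0 -> T halves each step, never reaching 0
obeys = cool(a1=2.0, a2=1.0, s0_2=0.0, T_start=1.0, n_steps=50)
# Third law violated: S(0, X2) < S(0, X1) -> absolute zero after a few steps
violates = cool(a1=2.0, a2=1.0, s0_2=-0.2, T_start=1.0, n_steps=50)

print("third law obeyed:  ", ["%.4f" % t for t in obeys[:6]], "... never reaches 0")
print("third law violated:", ["%.4f" % t for t in violates])
```

This is exactly the dichotomy sketched in Fig. 1: equal entropies at T = 0 force a geometric (hence infinite) staircase, while unequal entropies would let the staircase terminate.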

The conflict is resolved as follows: at a certain temperature the quantum nature of matter starts to dominate the behavior. Fermi particles follow Fermi–Dirac statistics and Bose particles follow Bose–Einstein statistics. In both cases the heat capacity at low temperatures is no longer temperature independent, even for ideal gases. For Fermi gases, the low-temperature molar heat capacity is linear in temperature, CV ∝ R(T/TF), with the Fermi temperature TF fixed by Avogadro's number NA, the molar volume V, and the molar mass M. For Bose gases, the heat capacity varies as CV ∝ R(T/TB)^(3/2), with the condensation temperature TB likewise fixed by the density.

The specific heats of the Fermi and Bose gases both satisfy the power-law condition above: indeed, they are power laws with α = 1 and α = 3/2 respectively.

The reason that T = 0 cannot be reached according to the third law is explained as follows: suppose that the temperature of a substance can be reduced in an isentropic process by changing the parameter X from X2 to X1. One can think of a multistage nuclear demagnetization setup where a magnetic field is switched on and off in a controlled way.[8] If there were an entropy difference at absolute zero, T = 0 could be reached in a finite number of steps. However, at T = 0 there is no entropy difference, so an infinite number of steps would be needed. The process is illustrated in Fig. 1.

Specific heat

A non-quantitative description of his third law that Nernst gave at the very beginning was simply that the specific heat can always be made zero by cooling a material down far enough.

Vapor pressure

The only liquids near absolute zero are ³He and ⁴He. Their heat of evaporation has a limiting value given by L = L0 + CT, with L0 and C constant. If we consider a container partly filled with liquid and partly with gas, the entropy of the liquid–gas mixture is S(T) = Sl(T) + xL(T)/T, where Sl(T) is the entropy of the liquid and x is the gas fraction. Clearly the entropy change during the liquid–gas transition (x from 0 to 1) diverges in the limit T → 0. This violates the third law, which requires the entropy to remain finite as T → 0. Nature solves this paradox as follows: at



temperatures below about 50 mK the vapor pressure is so low that the gas density is lower than the best vacuum in the universe. In other words, below 50 mK there is simply no gas above the liquid.

Latent heat of melting

The melting curves of ³He and ⁴He both extend down to absolute zero at finite pressure. At the melting pressure, liquid and solid are in equilibrium. The third law demands that the entropies of the solid and liquid are equal at T = 0. As a result, the latent heat of melting is zero, and the slope of the melting curve extrapolates to zero as a result of the Clausius–Clapeyron equation.

Thermal expansion coefficient

The thermal expansion coefficient is defined as α = (1/V)(∂V/∂T)p. With the Maxwell relation (∂V/∂T)p = −(∂S/∂p)T, and the vanishing of entropy changes with pressure as T → 0, it follows that the thermal expansion coefficient of all materials must go to zero at zero kelvin.

2.4.5 See also

• Adiabatic process
• Ground state
• Laws of thermodynamics
• Quantum thermodynamics
• Residual entropy
• Thermodynamic entropy
• Timeline of thermodynamics, statistical mechanics, and random processes
• Quantum refrigerators

2.4.6 References

[1] J. Wilks, The Third Law of Thermodynamics, Oxford University Press (1961).

[2] Kittel and Kroemer, Thermal Physics (2nd ed.), page 49.

[3] Wilks, J. (1971). The Third Law of Thermodynamics, Chapter 6 in Thermodynamics, volume 1, ed. W. Jost, of H. Eyring, D. Henderson, W. Jost, Physical Chemistry. An Advanced Treatise, Academic Press, New York, page 477.

[4] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics, New York, ISBN 0-88318-797-3, page 342.

[5] Kozliak, Evguenii; Lambert, Frank L. (2008). "Residual Entropy, the Third Law and Latent Heat". Entropy 10 (3): 274–84. Bibcode:2008Entrp..10..274K. doi:10.3390/e10030274.

[6] Reynolds and Perkins (1977). Engineering Thermodynamics. McGraw Hill. p. 438. ISBN 0-07-052046-1.

[7] Guggenheim, E.A. (1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North-Holland Publishing Company, Amsterdam, page 157.

[8] F. Pobell, Matter and Methods at Low Temperatures, Springer-Verlag, Berlin, 2007.

[9] Einstein and the Quantum, A. Douglas Stone, Princeton University Press, 2013.

J. Wilks, The Third Law of Thermodynamics, Oxford University Press (1961), p. 83.

2.4.7 Further reading

• Goldstein, Martin & Inge F. (1993). The Refrigerator and the Universe. Cambridge, MA: Harvard University Press. ISBN 0-674-75324-0. Chpt. 14 is a nontechnical discussion of the Third Law, one including the requisite elementary quantum mechanics.
• Braun, S.; Ronzheimer, J. P.; Schreiber, M.; Hodgman, S. S.; Rom, T.; Bloch, I.; Schneider, U. (2013). "Negative Absolute Temperature for Motional Degrees of Freedom". Science 339 (6115): 52–5. arXiv:1211.0545. Bibcode:2013Sci...339...52B. doi:10.1126/science.1227831. PMID 23288533. Lay summary – New Scientist (3 January 2013).
• Levy, A.; Alicki, R.; Kosloff, R. (2012). "Quantum refrigerators and the third law of thermodynamics". Phys. Rev. E 85: 061126. arXiv:1205.1347. Bibcode:2012PhRvE..85f1126L. doi:10.1103/PhysRevE.85.061126.

Chapter 3

Chapter 3. History

3.1 History of thermodynamics

The 1698 Savery Engine – the world's first commercially useful steam engine: built by Thomas Savery

The history of thermodynamics is a fundamental strand in the history of physics, the history of chemistry, and the history of science in general. Owing to the relevance of thermodynamics in much of science and technology, its history is finely woven with the developments of classical mechanics, quantum mechanics, magnetism, and chemical kinetics, to more distant applied fields such as meteorology, information theory, and biology (physiology), and to technological developments such as the steam engine, internal combustion engine, cryogenics and electricity generation. The development of thermodynamics both drove and was driven by atomic theory. It also, albeit in a subtle manner, motivated new directions in probability and statistics; see, for example, the timeline of thermodynamics.

3.1.1 History

See also: Timeline of thermodynamics

Contributions from ancient and medieval times

See also: History of heat and Vacuum

The ancients viewed heat as related to fire. In 3000 BC, the ancient Egyptians viewed heat as related to origin mythologies.[1] In the Western philosophical tradition, after much debate about the primal element among earlier pre-Socratic philosophers, Empedocles proposed a four-element theory, in which all substances derive from earth, water, air, and fire. The Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric. Around 500 BC, the Greek philosopher Heraclitus became famous as the "flux and fire" philosopher for his proverbial utterance: "All things are flowing." Heraclitus argued that the three principal elements in nature were fire, earth, and water.

Atomism is a central part of today's relationship between thermodynamics and statistical mechanics. Ancient thinkers such as Leucippus and Democritus, and later the Epicureans, by advancing atomism, laid the foundations for the later atomic theory. Until experimental proof of atoms was provided in the 20th century, the atomic theory was driven largely by philosophical considerations and scientific intuition. The 5th-century BC Greek philosopher Parmenides, in his only known work, a poem conventionally titled On Nature,


uses verbal reasoning to postulate that a void, essentially what is now known as a vacuum, could not occur in nature. This view was supported by the arguments of Aristotle, but was criticized by Leucippus and Hero of Alexandria. From antiquity to the Middle Ages various arguments were put forward to prove or disprove the existence of a vacuum, and several attempts were made to construct a vacuum, but all proved unsuccessful.

The European scientists Cornelius Drebbel, Robert Fludd, Galileo Galilei and Santorio Santorio in the 16th and 17th centuries were able to gauge the relative "coldness" or "hotness" of air, using a rudimentary air thermometer (or thermoscope). This may have been influenced by an earlier device, built by Philo of Byzantium and Hero of Alexandria, which could expand and contract the air. Around 1600, the English philosopher and scientist Francis Bacon surmised: "Heat itself, its essence and quiddity is motion and nothing else."

In 1643, Galileo Galilei, while generally accepting the 'sucking' explanation of horror vacui proposed by Aristotle, believed that nature's vacuum-abhorrence is limited. Pumps operating in mines had already proven that nature would only fill a vacuum with water up to a height of ~30 feet. Knowing this curious fact, Galileo encouraged his former pupil Evangelista Torricelli to investigate these supposed limitations. Torricelli did not believe that vacuum-abhorrence (horror vacui), in the sense of Aristotle's 'sucking' perspective, was responsible for raising the water. Rather, he reasoned, it was the result of the pressure exerted on the liquid by the surrounding air.

To prove this theory, he filled a long glass tube (sealed at one end) with mercury and upended it into a dish also containing mercury. Only a portion of the tube emptied (as shown adjacent); ~30 inches of the liquid remained. As the mercury emptied, a partial vacuum was created at the top of the tube. The gravitational force on the heavy element mercury prevented it from filling the vacuum.

Heating a body, such as a segment of protein alpha helix (above), tends to cause its atoms to vibrate more, and to expand or change phase if heating is continued; an axiom of nature noted by Herman Boerhaave in the 1700s.

Transition from chemistry to thermochemistry

See also: History of chemistry

The theory of phlogiston arose in the 17th century, late

The world's first ice calorimeter, used in the winter of 1782–83 by Antoine Lavoisier and Pierre-Simon Laplace to determine the heat evolved in various chemical changes, calculations which were based on Joseph Black's prior discovery of latent heat. These experiments mark the foundation of thermochemistry.

in the period of alchemy. Its replacement by caloric theory in the 18th century is one of the historical markers of the transition from alchemy to chemistry. Phlogiston was a hypothetical substance that was presumed to be liberated



from combustible substances during burning, and from metals during the process of rusting. Caloric, like phlogiston, was also presumed to be the "substance" of heat that would flow from a hotter body to a cooler body, thus warming it.

The first substantial experimental challenges to caloric theory arose in Rumford's 1798 work, when he showed that boring cast iron cannons produced great amounts of heat, which he ascribed to friction; his work was among the first to undermine the caloric theory. The development of the steam engine also focused attention on calorimetry and the amount of heat produced from different types of coal. The first quantitative research on the heat changes during chemical reactions was initiated by Lavoisier using an ice calorimeter, following research by Joseph Black on the latent heat of water.

More quantitative studies by James Prescott Joule from 1843 onwards provided soundly reproducible phenomena, and helped to place the subject of thermodynamics on a solid footing. William Thomson, for example, was still trying to explain Joule's observations within a caloric framework as late as 1850. The utility and explanatory power of kinetic theory, however, soon started to displace caloric, and it was largely obsolete by the end of the 19th century. Joseph Black and Lavoisier made important contributions in the precise measurement of heat changes using the calorimeter, a subject which became known as thermochemistry.

Robert Boyle, 1627–1691

Phenomenological thermodynamics

• Boyle's law (1662)
• Charles's law was first published by Joseph Louis Gay-Lussac in 1802, but he referenced unpublished work by Jacques Charles from around 1787. The relationship had been anticipated by the work of Guillaume Amontons in 1702.
• Gay-Lussac's law (1802)

Birth of thermodynamics as science

At its origins, thermodynamics was the study of engines.
A precursor of the engine was designed by the German scientist Otto von Guericke who, in 1650, designed and built the world's first vacuum pump and created the world's first vacuum, famously demonstrated with the Magdeburg hemispheres. He was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'Nature abhors a vacuum'.

Shortly thereafter, Irish physicist and chemist Robert Boyle learned of Guericke's designs and in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed the pressure–volume correlation: PV = constant. At that time, air was assumed to be a system of motionless particles, and not interpreted as a system of moving molecules; the concept of thermal motion came two centuries later. Therefore Boyle's publication in 1660 speaks about a mechanical concept: the air spring.[2] Later, after the invention of the thermometer, the property temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which led shortly later to the ideal gas law.

But already before the establishment of the ideal gas law, an associate of Boyle's named Denis Papin built in 1679 a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. One such scientist was Sadi Carnot, the "father of thermodynamics", who in 1824 published Reflections on the Motive Power of Fire, a discourse on heat, power, and engine efficiency. This marks the start of thermodynamics as a modern science.

A Watt steam engine, the steam engine that propelled the Industrial Revolution in Britain and the world

Hence, prior to 1698 and the invention of the Savery Engine, horses were used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen Engine, and later the Watt Engine. In time, these early engines would eventually be utilized in place of horses. Thus, each engine began to be associated with a certain amount of "horse power" depending upon how many horses it had replaced. The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born.

Sadi Carnot (1796–1832): the "father" of thermodynamics

Most cite Sadi Carnot's 1824 book Reflections on the Motive Power of Fire as the starting point for thermodynamics as a modern science. Carnot defined "motive power" to be the expression of the useful effect that a motor is capable of producing. Herein, Carnot introduced the first modern-day definition of "work": weight lifted through a height. The desire to understand, via formulation, this useful effect in relation to "work" is at the core of all modern-day thermodynamics.

In 1843, James Joule experimentally found the mechanical equivalent of heat. In 1845, Joule reported his best-known experiment, involving the use of a falling weight to spin a paddle-wheel in a barrel of water, which allowed him to estimate a mechanical equivalent of heat of 819 ft·lbf/Btu (4.41 J/cal). This led to the theory of conservation of energy and explained why heat can do work.

In 1850, the famed mathematical physicist Rudolf Clausius defined the term entropy S to be the heat lost or turned into waste, stemming from the Greek word entrepein, meaning to turn.

The name "thermodynamics", however, did not arrive until 1854, when the British mathematician and physicist William Thomson (Lord Kelvin) coined the term thermodynamics in his paper On the Dynamical Theory of Heat.[3]

In association with Clausius, in 1871, the Scottish mathematician and physicist James Clerk Maxwell formulated a new branch of thermodynamics called Statistical Thermodynamics, which functions to analyze large numbers of particles at equilibrium, i.e., systems where no changes are occurring, such that only their average properties as temperature T, pressure P, and volume V become important.

Soon thereafter, in 1875, the Austrian physicist Ludwig Boltzmann formulated a precise connection between entropy S and molecular motion:

S = k log W

being defined in terms of the number of possible states [W] such motion could occupy, where k is the Boltzmann constant.

The following year, 1876, was a seminal point in the development of human thought. During this essential period,

chemical engineer Willard Gibbs, the first person in America to be awarded a PhD in engineering (Yale), published an obscure 300-page paper titled On the Equilibrium of Heterogeneous Substances, wherein he formulated one grand equality, the Gibbs free energy equation, which suggested a measure of the amount of "useful work" attainable in reacting systems. Gibbs also originated the concept we now know as enthalpy H, calling it "a heat function for constant pressure".[4] The modern word enthalpy would be coined many years later by Heike Kamerlingh Onnes,[5] who based it on the Greek word enthalpein, meaning to warm.

Building on these foundations, those such as Lars Onsager, Erwin Schrödinger, Ilya Prigogine, and others brought these engine "concepts" into the thoroughfare of almost every modern-day branch of science.

Kinetic theory

Main article: Kinetic theory of gases

The idea that heat is a form of motion is perhaps an ancient one and is certainly discussed by Francis Bacon in 1620 in his Novum Organum. The first written scientific reflection on the microscopic nature of heat is probably to be found in a work by Mikhail Lomonosov, in which he wrote:

"(..) movement should not be denied based on the fact it is not seen. Who would deny that the leaves of trees move when rustled by a wind, despite it being unobservable from large distances? Just as in this case motion remains hidden due to perspective, it remains hidden in warm bodies due to the extremely small sizes of the moving particles. In both cases, the viewing angle is so small that neither the object nor their movement can be seen."

During the same years, Daniel Bernoulli published his book Hydrodynamics (1738), in which he derived an equation for the pressure of a gas considering the collisions of its atoms with the walls of a container. He proved that this pressure is two thirds the average kinetic energy of the gas in a unit volume. Bernoulli's ideas, however, made little impact on the dominant caloric culture. Bernoulli made a connection with Gottfried Leibniz's vis viva principle, an early formulation of the principle of conservation of energy, and the two theories became intimately entwined throughout their history. Though Benjamin Thompson suggested that heat was a form of motion as a result of his experiments in 1798, no attempt was made to reconcile theoretical and experimental approaches, and it is unlikely that he was thinking of the vis viva principle.

John Herapath later independently formulated a kinetic theory in 1820, but mistakenly associated temperature with momentum rather than vis viva or kinetic energy. His work ultimately failed peer review and was neglected. John James Waterston in 1843 provided a largely accurate account, again independently, but his work received the same reception, failing peer review even from someone as well-disposed to the kinetic principle as Davy.

Further progress in kinetic theory started only in the middle of the 19th century, with the works of Rudolf Clausius, James Clerk Maxwell, and Ludwig Boltzmann. In his 1857 work On the nature of the motion called heat, Clausius for the first time clearly states that heat is the average kinetic energy of molecules. This interested Maxwell, who in 1859 derived the momentum distribution later named after him. Boltzmann subsequently generalized his distribution for the case of gases in external fields.

Boltzmann is perhaps the most significant contributor to kinetic theory, as he introduced many of the fundamental concepts in the theory. Besides the Maxwell–Boltzmann distribution mentioned above, he also associated the kinetic energy of particles with their degrees of freedom. The Boltzmann equation for the distribution function of a gas in non-equilibrium states is still the most effective equation for studying transport phenomena in gases and metals. By introducing the concept of thermodynamic probability as the number of microstates corresponding to the current macrostate, he showed that its logarithm is proportional to entropy.

3.1.2 Branches of thermodynamics

The following list gives a rough outline as to when the major branches of thermodynamics came into inception:

• Thermochemistry - 1780s
• Classical thermodynamics - 1824
• Chemical thermodynamics - 1876
• Statistical mechanics - c. 1880s
• Equilibrium thermodynamics
• Engineering thermodynamics
• Chemical engineering thermodynamics - c. 1940s
• Non-equilibrium thermodynamics - 1941
• Small systems thermodynamics - 1960s
• Biological thermodynamics - 1957
• Ecosystem thermodynamics - 1959



• Relativistic thermodynamics - 1965
• Quantum thermodynamics - 1968
• Black hole thermodynamics - c. 1970s
• Geological thermodynamics - c. 1970s
• Biological evolution thermodynamics - 1978
• Geochemical thermodynamics - c. 1980s
• Atmospheric thermodynamics - c. 1980s
• Natural systems thermodynamics - 1990s
• Supramolecular thermodynamics - 1990s
• Earthquake thermodynamics - 2000

The phenomenon of heat conduction is immediately grasped in everyday life. In 1701, Sir Isaac Newton published his law of cooling. However, in the 17th century it came to be believed that all materials had an identical conductivity and that differences in sensation arose from their different heat capacities. Suggestions that this might not be the case came from the new science of electricity, in which it was easily apparent that some materials were good electrical conductors while others were effective insulators. Jan Ingen-Housz in 1785–89 made some of the earliest measurements, as did Benjamin Thompson during the same period. The fact that warm air rises, and the importance of the phenomenon to meteorology, was first realised by Edmund Halley in 1686. In 1804, Sir John Leslie observed that the cooling effect of a stream of air increased with its speed.

• Pharmaceutical systems thermodynamics – 2002

Ideas from thermodynamics have also been applied in other fields, for example:

• Thermoeconomics - c. 1970s
• Drug-receptor thermodynamics - 2001

Carl Wilhelm Scheele distinguished heat transfer by thermal radiation (radiant heat) from that by convection and conduction in 1777. In 1791, Pierre Prévost showed that all bodies radiate heat, no matter how hot or cold they are. In 1804, Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation. Though it had come to be suspected even from Scheele's work, in 1831 Macedonio Melloni demonstrated that black-body radiation could be reflected, refracted and polarised in the same way as light.

3.1.3 Entropy and the second law

Main article: History of entropy

Even though he was working with the caloric theory, Sadi Carnot in 1824 suggested that some of the caloric available for generating useful work is lost in any real process. In March 1851, while grappling to come to terms with the work of James Prescott Joule, Lord Kelvin started to speculate that there was an inevitable loss of useful heat in all processes. The idea was framed even more dramatically by Hermann von Helmholtz in 1854, giving birth to the spectre of the heat death of the universe.

In 1854, William John Macquorn Rankine started to make use in calculation of what he called his thermodynamic function. This has subsequently been shown to be identical to the concept of entropy formulated by Rudolf Clausius in 1865. Clausius used the concept to develop his classic statement of the second law of thermodynamics the same year.

3.1.4 Heat transfer

Main article: Heat transfer

James Clerk Maxwell's 1862 insight that both light and radiant heat were forms of electromagnetic wave led to the start of the quantitative analysis of thermal radiation. In 1879, Jožef Stefan observed that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and stated the Stefan–Boltzmann law. The law was derived theoretically by Ludwig Boltzmann in 1884.

3.1.5 Cryogenics

In 1702 Guillaume Amontons introduced the concept of absolute zero based on observations of gases. In 1810, Sir John Leslie froze water to ice artificially. The idea of absolute zero was generalised in 1848 by Lord Kelvin. In 1906, Walther Nernst stated the third law of thermodynamics.

3.1.6 See also

• Conservation of energy: Historical development
• History of Chemistry
• History of Physics
• Maxwell's thermodynamic surface

• Timeline of thermodynamics, statistical mechanics, and random processes
• Thermodynamics
• Timeline of heat engine technology
• Timeline of low-temperature technology

3.1.7 References

[1] J. Gwyn Griffiths (1955). "The Orders of Gods in Greece and Egypt (According to Herodotus)". The Journal of Hellenic Studies 75: 21–23. doi:10.2307/629164. JSTOR 629164.

[2] New Experiments physico-mechanicall, Touching the Spring of the Air and its Effects (1660).

[3] Thomson, W. (1854). "On the Dynamical Theory of Heat". Transactions of the Royal Society of Edinburgh 21 (part I): 123. doi:10.1017/s0080456800032014. Reprinted in Sir William Thomson (1882). Mathematical and Physical Papers 1. London, Cambridge: C.J. Clay, M.A. & Son, Cambridge University Press. p. 232: "Hence Thermo-dynamics falls naturally into two Divisions, of which the subjects are respectively, the relation of heat to the forces acting between contiguous parts of bodies, and the relation of heat to electrical agency."

[4] Laidler, Keith (1995). The World of Physical Chemistry. Oxford University Press. p. 110.

[5] Howard, Irmgard (2002). "H Is for Enthalpy, Thanks to Heike Kamerlingh Onnes and Alfred W. Porter". Journal of Chemical Education (ACS Publications) 79 (6): 697. Bibcode:2002JChEd..79..697H. doi:10.1021/ed079p697.

3.1.8 Further reading

• Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 0-435-54150-1.
• Leff, H.S. & Rex, A.F. (eds) (1990). Maxwell's Demon: Entropy, Information and Computing. Bristol: Adam Hilger. ISBN 0-7503-0057-4.

3.1.9 External links

• History of Statistical Mechanics and Thermodynamics - Timeline (1575 to 1980) @ Hyperjeff.net
• History of Thermodynamics - University of Waterloo
• Thermodynamics - WolframScience.com
• History of Thermodynamics - ThermodynamicStudy.net
• Historical Background of Carnegie-Mellon University
• Brief History of Thermodynamics - Berkeley [PDF]
• History of Thermodynamics - In Pictures

3.2 An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction

Benjamin Thompson

An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction (1798), which was published in the Philosophical Transactions of the Royal Society, is a scientific paper by Benjamin Thompson, Count Rumford, that provided a substantial challenge to established theories of heat and began the 19th-century revolution in thermodynamics.

3.2.1 Background

Main article: Caloric theory


Rumford was an opponent of the caloric theory of heat which held that heat was a fluid that could be neither created nor destroyed. He had further developed the view that all gases and liquids were absolute non-conductors of heat. His views were out of step with the accepted science of the time and the latter theory had particularly been attacked by John Dalton and John Leslie. Rumford was heavily influenced by the argument from design and it is likely that he wished to grant water a privileged and providential status in the regulation of human life. Though Rumford was to come to associate heat with motion, there is no evidence that he was committed to the kinetic theory or the principle of vis viva. In his 1798 paper, Rumford acknowledged that he had predecessors in the notion that heat was a form of motion. Those predecessors included Francis Bacon, Robert Boyle, Joule’s apparatus for measuring the mechanical equivalent of heat. Robert Hooke, John Locke, and Henry Cavendish. Charles Haldat made some penetrating criticisms of the reproducibility of Rumford’s results and it is possible to see 3.2.2 Experiments the whole experiment as somewhat tendentious. Rumford had observed the frictional heat generated by boring cannon at the arsenal in Munich. Rumford immersed a cannon barrel in water and arranged for a specially blunted boring tool. He showed that the water could be boiled within roughly two and a half hours and that the supply of frictional heat was seemingly inexhaustible. Rumford confirmed that no physical change had taken place in the material of the cannon by comparing the specific heats of the material machined away and that remaining were the same. Rumford argued that the seemingly indefinite generation of heat was incompatible with the caloric theory. He contended that the only thing communicated to the barrel was motion.

Rumford made no attempt to further quantify the heat generated or to measure the mechanical equivalent of heat. However, the experiment inspired the work of James Prescott Joule in the 1840s. Joule’s more exact measurements were pivotal in establishing the kinetic theory at the expense of caloric.

3.2.3 Reception

Most established scientists, such as William Henry and Thomas Thomson, believed that there was enough uncertainty in the caloric theory to allow its adaptation to account for the new results. It had certainly proved robust and adaptable up to that time. Furthermore, Thomson, Jöns Jakob Berzelius, and Antoine César Becquerel observed that electricity could be indefinitely generated by friction, yet no educated scientist of the time was willing to hold that electricity was not a fluid. Ultimately, Rumford’s claim of the “inexhaustible” supply of heat was a reckless extrapolation from the study.

3.2.4 Notes

1. ^ Benjamin Count of Rumford (1798) “An inquiry concerning the source of the heat which is excited by friction,” Philosophical Transactions of the Royal Society of London, 88: 80–102. doi:10.1098/rstl.1798.0006

2. ^ Cardwell (1971) p. 99

3. ^ Leslie, J. (1804). An Experimental Enquiry into the Nature and Propagation of Heat. London.

4. ^ Rumford (1804) “An enquiry concerning the nature of heat and the mode of its communication,” Philosophical Transactions of the Royal Society, p. 77

5. ^ Cardwell (1971) pp. 99–100

6. ^ From p. 100 of Rumford’s paper of 1798: “Before I finish this paper, I would beg leave to observe, that although, in treating the subject I have endeavoured to investigate, I have made no mention of the names of those who have gone over the same ground before me, nor of the success of their labours; this omission has not been owing to any want of respect for my predecessors, but was merely to avoid prolixity, and to be more at liberty to pursue, without interruption, the natural train of my own ideas.”

7. ^ In his Novum Organum (1620), Francis Bacon concludes that heat is the motion of the particles composing matter. In Francis Bacon, Novum Organum (London, England: William Pickering, 1850), from page 164: “… Heat appears to be Motion.” From p. 165: “… the very essence of Heat, or the Substantial self of Heat, is motion and nothing else, …” From p. 168: “… Heat is not a uniform Expansive Motion of the whole, but of the small particles of the body; …”

8. ^ “Of the mechanical origin of heat and cold” in: Robert Boyle, Experiments, Notes, &c. About the Mechanical Origine or Production of Divers Particular Qualities: … (London, England: E. Flesher (printer), 1675). At the conclusion of Experiment VI, Boyle notes that if a nail is driven completely into a piece of wood, then further blows with the hammer cause it to become hot as the hammer’s force is transformed into random motion of the nail’s atoms. From pp. 61–62: “… the impulse given by the stroke, being unable either to drive the nail further on, or destroy its interness [i.e., entireness, integrity], must be spent in making various vehement and intestine commotion of the parts among themselves, and in such an one we formerly observed the nature of heat to consist.”

9. ^ “Lectures of Light” (May 1681) in: Robert Hooke with R. Waller, ed., The Posthumous Works of Robert Hooke … (London, England: Samuel Smith and Benjamin Walford, 1705). From page 116: “Now Heat, as I shall afterward prove, is nothing but the internal Motion of the Particles of [a] Body; and the hotter a Body is, the more violently are the Particles moved, …”

10. ^ Sometime during the period 1698–1704, John Locke wrote his book Elements of Natural Philosophy, which was first published in 1720: John Locke with Pierre Des Maizeaux, ed., A Collection of Several Pieces of Mr. John Locke, Never Before Printed, Or Not Extant in His Works (London, England: R. Francklin, 1720). From p. 224: “Heat, is a very brisk agitation of the insensible parts of the object, which produces in us that sensation, from whence we denominate the object hot: so what in our sensation is heat, in the object is nothing but motion. This appears by the way, whereby heat is produc'd: for we see that the rubbing of a brass-nail upon a board, will make it very hot; and the axle-trees of carts and coaches are often hot, and sometimes to a degree, that it sets them on fire, by rubbing of the nave of the wheel upon it.”

11. ^ Henry Cavendish (1783) “Observations on Mr. Hutchins’s experiments for determining the degree of cold at which quicksilver freezes,” Philosophical Transactions of the Royal Society of London, 73: 303–328. From the footnote continued on p. 313: “… I think Sir Isaac Newton’s opinion, that heat consists in the internal motion of the particles of bodies, much the most probable …”

12. ^ Henry, W. (1802) “A review of some experiments which have been supposed to disprove the materiality of heat,” Manchester Memoirs v, p. 603

13. ^ Thomson, T. “Caloric”, Supplement on Chemistry, Encyclopædia Britannica, 3rd ed.

14. ^ Haldat, C.N.A. (1810) “Inquiries concerning the heat produced by friction,” Journal de Physique lxv, p. 213

15. ^ Cardwell (1971) p. 102

3.2.5 Bibliography

• Cardwell, D.S.L. (1971). From Watt to Clausius: The Rise of Thermodynamics in the Early Industrial Age. London: Heinemann. ISBN 0-435-54150-1.

Chapter 4. System State

4.1 Control volume

In continuum mechanics and thermodynamics, a control volume is a mathematical abstraction employed in the process of creating mathematical models of physical processes. In an inertial frame of reference, it is a volume fixed in space or moving with constant flow velocity through which the continuum (gas, liquid or solid) flows. The surface enclosing the control volume is referred to as the control surface.[1]

At steady state, a control volume can be thought of as an arbitrary volume in which the mass of the continuum remains constant. As a continuum moves through the control volume, the mass entering the control volume is equal to the mass leaving the control volume. At steady state, and in the absence of work and heat transfer, the energy within the control volume remains constant. It is analogous to the classical mechanics concept of the free body diagram.

4.1.1 Overview

Typically, to understand how a given physical law applies to the system under consideration, one first begins by considering how it applies to a small control volume, or “representative volume”. There is nothing special about a particular control volume: it simply represents a small part of the system to which physical laws can be easily applied. This gives rise to what is termed a volumetric, or volume-wise, formulation of the mathematical model.

One can then argue that since the physical laws behave in a certain way on a particular control volume, they behave the same way on all such volumes, since that particular control volume was not special in any way. In this way, the corresponding point-wise formulation of the mathematical model can be developed so it can describe the physical behaviour of an entire (and maybe more complex) system.

In continuum mechanics the conservation equations (for instance, the Navier–Stokes equations) are in integral form. They therefore apply on volumes. Finding forms of the equation that are independent of the control volumes allows simplification of the integral signs.

4.1.2 Substantive derivative

Main article: Material derivative

Computations in continuum mechanics often require that the regular time derivative operator d/dt is replaced by the substantive derivative operator D/Dt. This can be seen as follows.

Consider a bug that is moving through a volume where there is some scalar, e.g. pressure, that varies with time and position: p = p(t, x, y, z).

If the bug during the time interval from t to t + dt moves from (x, y, z) to (x + dx, y + dy, z + dz), then the bug experiences a change dp in the scalar value,

    dp = (∂p/∂t) dt + (∂p/∂x) dx + (∂p/∂y) dy + (∂p/∂z) dz

(the total differential). If the bug is moving with a velocity v = (vx, vy, vz), the change in particle position is v dt = (vx dt, vy dt, vz dt), and we may write

    dp = (∂p/∂t) dt + (∂p/∂x) vx dt + (∂p/∂y) vy dt + (∂p/∂z) vz dt
       = (∂p/∂t + vx ∂p/∂x + vy ∂p/∂y + vz ∂p/∂z) dt
       = (∂p/∂t + v · ∇p) dt,

where ∇p is the gradient of the scalar field p. So:

    d/dt = ∂/∂t + v · ∇.

If the bug is just moving with the flow, the same formula applies, but now the velocity vector, v, is that of the flow, u. The last parenthesized expression is the substantive derivative of the scalar pressure. Since the pressure p in this computation is an arbitrary scalar field, we may abstract it and write the substantive derivative operator as

    D/Dt = ∂/∂t + u · ∇.
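The operator identity above is easy to exercise numerically. The sketch below approximates Dp/Dt = ∂p/∂t + u · ∇p with central finite differences; the pressure field and the velocities are invented purely for illustration.

```python
# Approximate the substantive derivative Dp/Dt = dp/dt + u . grad(p)
# with central finite differences. The field p and velocity u below
# are illustrative assumptions, not taken from the text.

def p(t, x, y, z):
    # A linear example field, so Dp/Dt has a known closed form.
    return 2.0*t + 3.0*x + 4.0*y + 5.0*z

def substantive_derivative(f, t, pos, u, h=1e-5):
    x, y, z = pos
    ux, uy, uz = u
    df_dt = (f(t + h, x, y, z) - f(t - h, x, y, z)) / (2*h)
    df_dx = (f(t, x + h, y, z) - f(t, x - h, y, z)) / (2*h)
    df_dy = (f(t, x, y + h, z) - f(t, x, y - h, z)) / (2*h)
    df_dz = (f(t, x, y, z + h) - f(t, x, y, z - h)) / (2*h)
    return df_dt + ux*df_dx + uy*df_dy + uz*df_dz

# For the linear field, Dp/Dt = 2 + 3*ux + 4*uy + 5*uz exactly,
# so a flow with u = (1, 0, 0) gives approximately 5.0:
print(substantive_derivative(p, 0.0, (1.0, 2.0, 3.0), (1.0, 0.0, 0.0)))
```

For a steady observer (u = 0) the result reduces to the local time derivative ∂p/∂t, which is the point of the decomposition.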

4.1.3 See also

• Continuum mechanics
• Cauchy momentum equation
• Special relativity
• Substantive derivative

4.1.4 References

• James R. Welty, Charles E. Wicks, Robert E. Wilson & Gregory Rorrer, Fundamentals of Momentum, Heat, and Mass Transfer. ISBN 0-471-38149-7

Notes

[1] G.J. Van Wylen and R.E. Sonntag (1985), Fundamentals of Classical Thermodynamics, Section 2.1 (3rd edition), John Wiley & Sons, Inc., New York. ISBN 0-471-82933-1

4.1.5 External links

• Integral Approach to the Control Volume analysis of Fluid Flow

4.2 Ideal gas

An ideal gas is a theoretical gas composed of many randomly moving point particles that do not interact except when they collide elastically. The ideal gas concept is useful because it obeys the ideal gas law, a simplified equation of state, and is amenable to analysis under statistical mechanics. One mole of an ideal gas has a volume of 22.7 L at STP as defined by IUPAC.

At normal conditions such as standard temperature and pressure, most real gases behave qualitatively like an ideal gas. Many gases such as nitrogen, oxygen, hydrogen, the noble gases, and some heavier gases like carbon dioxide can be treated like ideal gases within reasonable tolerances.[1] Generally, a gas behaves more like an ideal gas at higher temperature and lower pressure,[1] as the potential energy due to intermolecular forces becomes less significant compared with the particles’ kinetic energy, and the size of the molecules becomes less significant compared to the empty space between them.

The ideal gas model tends to fail at lower temperatures or higher pressures, when intermolecular forces and molecular size become important. It also fails for most heavy gases, such as many refrigerants,[1] and for gases with strong intermolecular forces, notably water vapor. At high pressures, the volume of a real gas is often considerably greater than that of an ideal gas. At low temperatures, the pressure of a real gas is often considerably less than that of an ideal gas. At some point of low temperature and high pressure, real gases undergo a phase transition, such as to a liquid or a solid. The model of an ideal gas, however, does not describe or allow phase transitions; these must be modeled by more complex equations of state. The deviation from ideal gas behaviour can be described by a dimensionless quantity, the compressibility factor, Z.

The ideal gas model has been explored in both Newtonian dynamics (as in "kinetic theory") and in quantum mechanics (as a "gas in a box"). The ideal gas model has also been used to model the behavior of electrons in a metal (in the Drude model and the free electron model), and it is one of the most important models in statistical mechanics.

4.2.1 Types of ideal gas

There are three basic classes of ideal gas:

• the classical or Maxwell–Boltzmann ideal gas,
• the ideal quantum Bose gas, composed of bosons, and
• the ideal quantum Fermi gas, composed of fermions.
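The 22.7 L molar volume at IUPAC STP quoted above follows directly from the ideal gas law PV = nRT. A minimal numeric check, using the gas constant R = 8.314 J·K⁻¹·mol⁻¹ from this section:

```python
# Ideal gas law PV = nRT, solved for the volume.
R = 8.314  # gas constant, J/(K mol)

def volume(n, T, P):
    """Volume in m^3 of n moles at temperature T (K) and pressure P (Pa)."""
    return n * R * T / P

# One mole at IUPAC STP (273.15 K, 100 kPa):
V = volume(1.0, 273.15, 100_000.0)
print(round(V * 1000, 1), "L")  # -> 22.7 L
```

The older STP convention of 1 atm (101.325 kPa) gives the also-common figure of about 22.4 L, which is why both numbers appear in the literature.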

The classical ideal gas can be separated into two types: The classical thermodynamic ideal gas and the ideal quantum Boltzmann gas. Both are essentially the same, except that the classical thermodynamic ideal gas is based on classical statistical mechanics, and certain thermodynamic parameters such as the entropy are only specified to within an undetermined additive constant. The ideal quantum Boltzmann gas overcomes this limitation by taking the limit of the quantum Bose gas and quantum Fermi gas in the limit of high temperature to specify these additive constants. The behavior of a quantum Boltzmann gas is the same as that of a classical ideal gas except for the specification of these constants. The results of the quantum Boltzmann gas are used

in a number of cases including the Sackur–Tetrode equation for the entropy of an ideal gas and the Saha ionization equation for a weakly ionized plasma.

4.2.2 Classical thermodynamic ideal gas

Macroscopic account

The ideal gas law is an extension of experimentally discovered gas laws. Real fluids at low density and high temperature approximate the behavior of a classical ideal gas. However, at lower temperatures or a higher density, a real fluid deviates strongly from the behavior of an ideal gas, particularly as it condenses from a gas into a liquid or as it deposits from a gas into a solid. This deviation is expressed as a compressibility factor.

The classical thermodynamic properties of an ideal gas can be described by two equations of state.[2][3]

One of them is the well known ideal gas law

    PV = nRT

where

• P is the pressure
• V is the volume
• n is the amount of substance of the gas (in moles)
• R is the gas constant (8.314 J·K⁻¹·mol⁻¹)
• T is the absolute temperature.

This equation is derived from Boyle’s law: V = k/P (at constant T and n); Charles’s law: V = bT (at constant P and n); and Avogadro’s law: V = an (at constant T and P); where

• k is a constant used in Boyle’s law
• b is a proportionality constant, equal to V/T
• a is a proportionality constant, equal to V/n.

By combining the three laws, it would demonstrate that V³ = kba(Tn/P), which would mean that V = (kba)^(1/3) (Tn/P)^(1/3). Under ideal conditions, V = R(Tn/P); that is, PV = nRT.

The other equation of state of an ideal gas must express Joule’s law, that the internal energy of a fixed mass of ideal gas is a function only of its temperature. For the present purposes it is convenient to postulate an exemplary version of this law by writing:

    U = ĉV nRT

where

• U is the internal energy
• ĉV is the dimensionless specific heat capacity at constant volume, ≈ 3/2 for a monatomic gas, 5/2 for a diatomic gas, and 3 for more complex molecules.

Microscopic model

In order to switch from macroscopic quantities (left hand side of the following equation) to microscopic ones (right hand side), we use

    nR = N kB

where

• N is the number of gas particles
• kB is the Boltzmann constant (1.381×10⁻²³ J·K⁻¹).

The probability distribution of particles by velocity or energy is given by the Maxwell speed distribution.

The ideal gas model depends on the following assumptions:

• The molecules of the gas are indistinguishable, small, hard spheres
• All collisions are elastic and all motion is frictionless (no energy loss in motion or collision)
• Newton’s laws apply
• The average distance between molecules is much larger than the size of the molecules
• The molecules are constantly moving in random directions with a distribution of speeds
• There are no attractive or repulsive forces between the molecules apart from those that determine their point-like collisions
• The only forces between the gas molecules and the surroundings are those that determine the point-like collisions of the molecules with the walls

• In the simplest case, there are no long-range forces between the molecules of the gas and the surroundings.

The assumption of spherical particles is necessary so that there are no rotational modes allowed, unlike in a diatomic gas. The following three assumptions are very related: molecules are hard, collisions are elastic, and there are no inter-molecular forces. The assumption that the space between particles is much larger than the particles themselves is of paramount importance, and explains why the ideal gas approximation fails at high pressures.

4.2.3 Heat capacity

The heat capacity at constant volume, including for an ideal gas, is:

    ĉV = (1/nR) T (∂S/∂T)V = (1/nR) (∂U/∂T)V

where S is the entropy. This is the dimensionless heat capacity at constant volume, which is generally a function of temperature due to intermolecular forces. For moderate temperatures, the constant for a monatomic gas is ĉV = 3/2 while for a diatomic gas it is ĉV = 5/2. It is seen that macroscopic measurements on heat capacity provide information on the microscopic structure of the molecules.

The heat capacity at constant pressure of 1/R mole of ideal gas is:

    ĉP = (1/nR) T (∂S/∂T)P = (1/nR) (∂H/∂T)P = ĉV + 1

where H = U + pV is the enthalpy of the gas.

Sometimes, a distinction is made between an ideal gas, where ĉV and ĉP could vary with temperature, and a perfect gas, for which this is not the case.

The ratio of the constant volume and constant pressure heat capacities is

    γ = cP / cV

For air, which is a mixture of gases, this ratio is 1.4.

4.2.4 Entropy

Using the results of thermodynamics only, we can go a long way in determining the expression for the entropy of an ideal gas. This is an important step since, according to the theory of thermodynamic potentials, if we can express the entropy as a function of U (U is a thermodynamic potential), volume V and the number of particles N, then we will have a complete statement of the thermodynamic behavior of the ideal gas. We will be able to derive both the ideal gas law and the expression for internal energy from it.

Since the entropy is an exact differential, using the chain rule, the change in entropy when going from a reference state 0 to some other state with entropy S may be written as ΔS where:

    ΔS = ∫[S0→S] dS = ∫[T0→T] (∂S/∂T)V dT + ∫[V0→V] (∂S/∂V)T dV

where the reference variables may be functions of the number of particles N. Using the definition of the heat capacity at constant volume for the first differential and the appropriate Maxwell relation for the second we have:

    ΔS = ∫[T0→T] (CV/T) dT + ∫[V0→V] (∂P/∂T)V dV.

Expressing CV in terms of ĉV as developed in the above section, differentiating the ideal gas equation of state, and integrating yields:

    ΔS = ĉV N k ln(T/T0) + N k ln(V/V0)

which implies that the entropy may be expressed as:

    S = N k ln( V T^ĉV / f(N) )

where all constants have been incorporated into the logarithm as f(N), which is some function of the particle number N having the same dimensions as V T^ĉV in order that the argument of the logarithm be dimensionless. We now impose the constraint that the entropy be extensive. This will mean that when the extensive parameters (V and N) are multiplied by a constant, the entropy will be multiplied by the same constant. Mathematically:

    S(T, aV, aN) = a S(T, V, N).

From this we find an equation for the function f(N):

    a f(N) = f(aN).

Differentiating this with respect to a, setting a equal to unity, and then solving the differential equation yields f(N):

    f(N) = ΦN

where Φ may vary for different gases, but will be independent of the thermodynamic state of the gas. It will have the dimensions of V T^ĉV / N. Substituting into the equation for the entropy:

    S / (N k) = ln( V T^ĉV / (N Φ) )

and using the expression for the internal energy of an ideal gas, the entropy may be written:

    S / (N k) = ln[ (V/N) (U / (ĉV k N))^ĉV (1/Φ) ]

Since this is an expression for entropy in terms of U, V, and N, it is a fundamental equation from which all other properties of the ideal gas may be derived. This is about as far as we can go using thermodynamics alone.

Note that the above equation is flawed: as the temperature approaches zero, the entropy approaches negative infinity, in contradiction to the third law of thermodynamics. In the above “ideal” development, there is a critical point, not at absolute zero, at which the argument of the logarithm becomes unity, and the entropy becomes zero. This is unphysical. The above equation is a good approximation only when the argument of the logarithm is much larger than unity; the concept of an ideal gas breaks down at low values of V/N. Nevertheless, there will be a “best” value of the constant in the sense that the predicted entropy is as close as possible to the actual entropy, given the flawed assumption of ideality. A quantum-mechanical derivation of this constant is developed in the derivation of the Sackur–Tetrode equation, which expresses the entropy of a monatomic (ĉV = 3/2) ideal gas. In the Sackur–Tetrode theory the constant depends only upon the mass of the gas particle. The Sackur–Tetrode equation also suffers from a divergent entropy at absolute zero, but is a good approximation for the entropy of a monatomic ideal gas for high enough temperatures.

4.2.5 Thermodynamic potentials

Main article: Thermodynamic potential

Expressing the entropy as a function of T, V, and N:

    S / (N k) = ln( V T^ĉV / (N Φ) )

The chemical potential of the ideal gas is calculated from the corresponding equation of state (see thermodynamic potential):

    μ = (∂G/∂N)T,P

where G is the Gibbs free energy, equal to U + PV − TS, so that:

    μ(T, V, N) = kT ( ĉP − ln( V T^ĉV / (N Φ) ) )

where, as before, ĉP = ĉV + 1. The thermodynamic potentials for an ideal gas can now be written as functions of T, V, and N. The most informative way of writing the potentials is in terms of their natural variables, since each of these equations can be used to derive all of the other thermodynamic variables of the system. In terms of their natural variables, the thermodynamic potentials of a single-species ideal gas are:

    U(S, V, N) = ĉV N k ( (NΦ/V) e^(S/Nk) )^(1/ĉV)

    A(T, V, N) = N k T ( ĉV − ln( V T^ĉV / (N Φ) ) )

    H(S, P, N) = ĉP N k ( (PΦ/k) e^(S/Nk) )^(1/ĉP)

    G(T, P, N) = N k T ( ĉP − ln( k T^ĉP / (P Φ) ) )

In statistical mechanics, the relationship between the Helmholtz free energy and the partition function is fundamental, and is used to calculate the thermodynamic properties of matter; see configuration integral for more details.

4.2.6 Speed of sound

Main article: Speed of sound

The speed of sound in an ideal gas is given by

    csound = √( (∂P/∂ρ)s ) = √( γP/ρ ) = √( γRT/M )

where

• γ is the adiabatic index (ĉP/ĉV)
• s is the entropy per particle of the gas
• ρ is the mass density of the gas
• P is the pressure of the gas
• R is the universal gas constant
• T is the temperature
• M is the molar mass of the gas.

4.2.7 Table of ideal gas equations

See Table of thermodynamic equations: Ideal gas.

4.2.8 Ideal quantum gases

In the above-mentioned Sackur–Tetrode equation, the best choice of the entropy constant was found to be proportional to the quantum thermal wavelength of a particle, and the point at which the argument of the logarithm becomes zero is roughly equal to the point at which the average distance between particles becomes equal to the thermal wavelength. In fact, quantum theory itself predicts the same thing. Any gas behaves as an ideal gas at high enough temperature and low enough density, but at the point where the Sackur–Tetrode equation begins to break down, the gas will begin to behave as a quantum gas, composed of either bosons or fermions. (See the gas in a box article for a derivation of the ideal quantum gases, including the ideal Boltzmann gas.)

Gases tend to behave as an ideal gas over a wider range of pressures when the temperature reaches the Boyle temperature.

Ideal Boltzmann gas

The ideal Boltzmann gas yields the same results as the classical thermodynamic gas, but makes the following identification for the undetermined constant Φ:

    Φ = T^(3/2) Λ³ / g

where Λ is the thermal de Broglie wavelength of the gas and g is the degeneracy of states.

Ideal Bose and Fermi gases

An ideal gas of bosons (e.g. a photon gas) will be governed by Bose–Einstein statistics and the distribution of energy will be in the form of a Bose–Einstein distribution. An ideal gas of fermions will be governed by Fermi–Dirac statistics and the distribution of energy will be in the form of a Fermi–Dirac distribution.

4.2.9 See also

• Compressibility factor
• Dynamical billiards - billiard balls as a model of an ideal gas
• Table of thermodynamic equations
• Scale-free ideal gas

4.2.10 References

[1] Cengel, Yunus A.; Boles, Michael A. Thermodynamics: An Engineering Approach (Fourth ed.). p. 89. ISBN 0-07-238332-1.

[2] Adkins, C.J. (1983). Equilibrium Thermodynamics, third edition (first edition 1968), Cambridge University Press, Cambridge UK. ISBN 0-521-25445-0, pp. 116–120.

[3] Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam. ISBN 0-444-50426-5, p. 88.

4.3 Real gas

Real gases are non-hypothetical gases whose molecules occupy space and have interactions; consequently, they do not adhere to the ideal gas law. To understand the behaviour of real gases, the following must be taken into account:

• compressibility effects;
• variable specific heat capacity;
• van der Waals forces;
• non-equilibrium thermodynamic effects;
• issues with molecular dissociation and elementary reactions with variable composition.
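Before turning to specific real-gas models, the ideal-gas results just summarized are easy to check numerically. For instance, the speed of sound c = √(γRT/M) from section 4.2.6, evaluated for air with γ = 1.4 (the value quoted above) and an assumed molar mass M ≈ 0.029 kg/mol:

```python
import math

R = 8.314  # universal gas constant, J/(K mol)

def speed_of_sound(gamma, T, M):
    """Ideal-gas speed of sound, c = sqrt(gamma * R * T / M)."""
    return math.sqrt(gamma * R * T / M)

# Air at 20 C: gamma = 1.4 as quoted in the text; M = 0.029 kg/mol
# is an assumed round figure for the mean molar mass of air.
c = speed_of_sound(1.4, 293.15, 0.029)
print(round(c))  # -> 343, close to the measured speed of sound in air
```

That the ideal-gas formula lands within a fraction of a percent of experiment for air at room conditions illustrates why the model is so widely used despite the caveats listed above.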

For most applications, such a detailed analysis is unnecessary, and the ideal gas approximation can be used with reasonable accuracy. On the other hand, real-gas models have to be used near the condensation point of gases, near critical points, at very high pressures, to explain the Joule–Thomson effect and in other less usual cases. The deviation from ideality can be described by the compressibility factor Z.

4.3.1 Models

van der Waals model

Main article: van der Waals equation

Real gases are often modeled by taking into account their molar weight and molar volume:

    RT = (P + a/Vm²)(Vm − b)

where P is the pressure, T is the temperature, R the ideal gas constant, and Vm the molar volume. a and b are parameters that are determined empirically for each gas, but are sometimes estimated from their critical temperature (Tc) and critical pressure (Pc) using these relations:

    a = 27 R² Tc² / (64 Pc)

    b = R Tc / (8 Pc)
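As a sketch of how the van der Waals corrections play out numerically, the snippet below compares the ideal and van der Waals pressures for one mole of CO₂ in one litre at 300 K. The critical constants used are standard textbook values assumed here for illustration; they do not appear in this text.

```python
# Van der Waals vs ideal-gas pressure for one mole of CO2 in 1 L at 300 K.
# Critical constants are assumed illustrative values for CO2.
R = 8.314                # J/(K mol)
Tc, Pc = 304.1, 7.38e6   # critical temperature (K) and pressure (Pa)

a = 27 * R**2 * Tc**2 / (64 * Pc)   # Pa m^6 / mol^2
b = R * Tc / (8 * Pc)               # m^3 / mol

def p_ideal(T, Vm):
    return R * T / Vm

def p_vdw(T, Vm):
    return R * T / (Vm - b) - a / Vm**2

T, Vm = 300.0, 1e-3
print(p_ideal(T, Vm), p_vdw(T, Vm))
# The van der Waals pressure comes out roughly 10% below the ideal value,
# the attractive a/Vm^2 term dominating the excluded-volume correction
# at this density.
```

At larger molar volumes both corrections shrink and the two pressures converge, which is the low-density ideal-gas limit described earlier.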

Redlich–Kwong model The Redlich–Kwong equation is another two-parameter equation that is used to model real gases. It is almost always more accurate than the van der Waals equation, and often more accurate than some equations with more than two parameters. The equation is ( RT = Isotherms of real gas Dark blue curves – isotherms below the critical temperature. Green sections – metastable states. The section to the left of point F – normal liquid. Point F – boiling point. Line FG – equilibrium of liquid and gaseous phases. Section FA – superheated liquid. Section F′A – stretched liquid (p TTH While Clausius based his definition on a reversible process, there are also irreversible processes that change entropy. So more heat is given up to the cold reservoir than in the Following the second law of thermodynamics, entropy of an Carnot cycle. If we denote the entropies by Sᵢ=Qᵢ/Tᵢ for isolated system always increases. The difference between the two states, then the above inequality can be written as a an isolated system and closed system is that heat may not decrease in the entropy flow to and from an isolated system, but heat flow to and from a closed system is possible. Nevertheless, for both SH − SC < 0 closed and isolated systems, and indeed, also in open systems, irreversible thermodynamics processes may occur. or According SH < SC H to the Clausius equality, for a reversible ∫ cyclic process: δQTrev = 0. This means the line integral L δQTrev In other words, the entropy that leaves the system is greater is path-independent. than the entropy that enters the system, implying that some So we can define a state function S called entropy, which

satisfies

    dS = δQrev / T.

To find the entropy difference between any two states of a system, the integral must be evaluated for some reversible path between the initial and final states.[13] Since entropy is a state function, the entropy change of the system for an irreversible path will be the same as for a reversible path between the same two states.[14] However, the entropy change of the surroundings will be different.

We can only obtain the change of entropy by integrating the above formula. To obtain the absolute value of the entropy, we need the third law of thermodynamics, which states that S = 0 at absolute zero for perfect crystals.

From a macroscopic perspective, in classical thermodynamics the entropy is interpreted as a state function of a thermodynamic system: that is, a property depending only on the current state of the system, independent of how that state came to be achieved. In any process where the system gives up energy ΔE, and its entropy falls by ΔS, a quantity at least TR ΔS of that energy must be given up to the system’s surroundings as unusable heat (TR is the temperature of the system’s external surroundings). Otherwise the process will not go forward. In classical thermodynamics, the entropy of a system is defined only if it is in thermodynamic equilibrium.

Statistical mechanics

The statistical definition was developed by Ludwig Boltzmann in the 1870s by analyzing the statistical behavior of the microscopic components of the system. Boltzmann showed that this definition of entropy was equivalent to the thermodynamic entropy to within a constant number which has since been known as Boltzmann’s constant. In summary, the thermodynamic definition of entropy provides the experimental definition of entropy, while the statistical definition of entropy extends the concept, providing an explanation and a deeper understanding of its nature.

The interpretation of entropy in statistical mechanics is the measure of uncertainty, or mixedupness in the phrase of Gibbs, which remains about a system after its observable macroscopic properties, such as temperature, pressure and volume, have been taken into account. For a given set of macroscopic variables, the entropy measures the degree to which the probability of the system is spread out over different possible microstates. In contrast to the macrostate, which characterizes plainly observable average quantities, a microstate specifies all molecular details about the system including the position and velocity of every molecule. The more such states available to the system with appreciable probability, the greater the entropy. In statistical mechanics, entropy is a measure of the number of ways in which a system may be arranged, often taken to be a measure of “disorder” (the higher the entropy, the higher the disorder).[15][16][17] This definition describes the entropy as being proportional to the natural logarithm of the number of possible microscopic configurations of the individual atoms and molecules of the system (microstates) which could give rise to the observed macroscopic state (macrostate) of the system. The constant of proportionality is the Boltzmann constant.

Specifically, entropy is a logarithmic measure of the number of states with significant probability of being occupied:

    S = −kB Σi pi ln pi,

where kB is the Boltzmann constant, equal to 1.38065×10⁻²³ J/K. The summation is over all the possible microstates of the system, and pi is the probability that the system is in the i-th microstate.[18] This definition assumes that the basis set of states has been picked so that there is no information on their relative phases. In a different basis set, the more general expression is

    S = −kB Tr( ρ̂ ln ρ̂ ),

where ρ̂ is the density matrix, Tr is the trace and ln is the matrix logarithm. This density matrix formulation is not needed in cases of thermal equilibrium so long as the basis states are chosen to be energy eigenstates. For most practical purposes, this can be taken as the fundamental definition of entropy since all other formulas for S can be mathematically derived from it, but not vice versa.

In what has been called the fundamental assumption of statistical thermodynamics or the fundamental postulate in statistical mechanics, the occupation of any microstate is assumed to be equally probable (i.e. pi = 1/Ω, where Ω is the number of microstates); this assumption is usually justified for an isolated system in equilibrium.[19] Then the previous equation reduces to

    S = kB ln Ω.

In thermodynamics, such a system is one in which the volume, number of molecules, and internal energy are fixed (the microcanonical ensemble).

The most general interpretation of entropy is as a measure of our uncertainty about a system. The equilibrium state of a system maximizes the entropy because we have lost all information about the initial conditions except for the conserved variables; maximizing the entropy maximizes our ignorance about the details of the system.[20] This uncertainty

The interpretative model has a central role in determining entropy. The qualifier “for a given set of macroscopic variables” above has deep implications: if two observers use different sets of macroscopic variables, they will observe different entropies. For example, if observer A uses the variables U, V and W, and observer B uses U, V, W, X, then, by changing X, observer B can cause an effect that looks like a violation of the second law of thermodynamics to observer A. In other words: the set of macroscopic variables one chooses must include everything that may change in the experiment, otherwise one might see decreasing entropy![21]

Entropy can be defined for any Markov process with reversible dynamics and the detailed balance property.

In Boltzmann’s 1896 Lectures on Gas Theory, he showed that this expression gives a measure of entropy for systems of atoms and molecules in the gas phase, thus providing a measure for the entropy of classical thermodynamics.

[Figure: A temperature–entropy diagram for steam. The vertical axis represents uniform temperature (in R) and the horizontal axis represents specific entropy (in Btu/lbm·R). Each dark line on the graph represents constant pressure, and these form a mesh with light gray lines of constant volume. Dark blue is liquid water, light blue is a liquid–steam mixture, and faint blue is steam; grey-blue represents supercritical liquid water. The saturated region is crossed by lines of constant quality from 0% to 100%.]

Entropy of a system

[Figure: A thermodynamic system, showing the system, its boundary, and the surroundings.]

Entropy is the above-mentioned unexpected and, to some, obscure integral that arises directly from the Carnot cycle. It is reversible heat divided by temperature. It is also, remarkably, a fundamental and very useful function of state.

In a thermodynamic system, pressure, density, and temperature tend to become uniform over time because this equilibrium state has higher probability (more possible combinations of microstates) than any other; see statistical mechanics. As an example, for a glass of ice water in air at room temperature, the difference in temperature between the warm room (the surroundings) and the cold glass of ice and water (the system, not part of the room) begins to be equalized as portions of the thermal energy from the warm surroundings spread to the cooler system of ice and water. Over time the temperature of the glass and its contents and the temperature of the room become equal. The entropy of the room has decreased as some of its energy has been dispersed to the ice and water. However, as calculated in the example, the entropy of the system of ice and water has increased more than the entropy of the surrounding room has decreased. In an isolated system such as the room and ice water taken together, the dispersal of energy from warmer to cooler always results in a net increase in entropy. Thus, when the “universe” of the room and ice water system has reached a temperature equilibrium, the entropy change from the initial state is at a maximum. The entropy of the thermodynamic system is a measure of how far the equalization has progressed.

Thermodynamic entropy is a non-conserved state function that is of great importance in the sciences of physics and chemistry.[15][22] Historically, the concept of entropy evolved in order to explain why some processes (permitted by conservation laws) occur spontaneously while their time reversals (also permitted by conservation laws) do not; systems tend to progress in the direction of increasing entropy.[23][24] For isolated systems, entropy never decreases.[22] This fact has several important consequences in science: first, it prohibits “perpetual motion” machines; and second, it implies the arrow of entropy has the same direction as the arrow of time. Increases in entropy correspond to irreversible changes in a system, because some energy is expended as waste heat, limiting the amount of work a system can do.[15][16][25][26]

Unlike many other functions of state, entropy cannot be directly observed but must be calculated. Entropy can be calculated for a substance as the standard molar entropy from absolute zero (also known as absolute entropy) or as a difference in entropy from some other reference state defined as zero entropy. Entropy has the dimension of energy divided by temperature, which has a unit of joules per kelvin (J/K) in the International System of Units. While these are the same units as heat capacity, the two concepts are distinct.[27] Entropy is not a conserved quantity: for example, in an isolated system with non-uniform temperature, heat might irreversibly flow and the temperature become more uniform, such that entropy increases. The second law of thermodynamics states that the entropy of a closed system may increase or otherwise remain constant. Chemical reactions cause changes in entropy, and entropy plays an important role in determining in which direction a chemical reaction spontaneously proceeds.
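The claim that irreversible heat flow in an isolated system increases total entropy can be illustrated with a short calculation: heat Q leaving a hot reservoir reduces its entropy by Q/T_hot, while the cold reservoir gains Q/T_cold, and since T_cold < T_hot the net change is positive. A minimal sketch; the numerical values are illustrative:

```python
# Irreversible heat flow between two reservoirs: Q leaves the hot
# reservoir (entropy change -Q/T_hot) and enters the cold one
# (entropy change +Q/T_cold).
Q = 100.0       # joules transferred (hypothetical value)
T_hot = 350.0   # K
T_cold = 280.0  # K

dS_hot = -Q / T_hot
dS_cold = Q / T_cold
dS_total = dS_hot + dS_cold

# Positive, as the second law requires: 100/280 - 100/350 J/K.
print(round(dS_total, 4))  # 0.0714
```

The same arithmetic shows why the reverse flow (heat moving spontaneously from cold to hot) would require dS_total < 0 and is therefore forbidden for an isolated system.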

One dictionary definition of entropy is “a measure of thermal energy per unit temperature that is not available for useful work”. For instance, a substance at uniform temperature is at maximum entropy and cannot drive a heat engine. A substance at non-uniform temperature is at a lower entropy (than if the heat distribution is allowed to even out) and some of the thermal energy can drive a heat engine.

A special case of entropy increase, the entropy of mixing, occurs when two or more different substances are mixed. If the substances are at the same temperature and pressure, there will be no net exchange of heat or work – the entropy change will be entirely due to the mixing of the different substances. At a statistical mechanical level, this results from the change in available volume per particle with mixing.[28]

6.2.3 Second law of thermodynamics

Main article: Second law of thermodynamics

The second law of thermodynamics requires that, in general, the total entropy of any system will not decrease other than by increasing the entropy of some other system. Hence, in a system isolated from its environment, the entropy of that system will tend not to decrease. It follows that heat will not flow from a colder body to a hotter body without the application of work (the imposition of order) to the colder body. Secondly, it is impossible for any device operating on a cycle to produce net work from a single temperature reservoir; the production of net work requires flow of heat from a hotter reservoir to a colder reservoir, or a single expanding reservoir undergoing adiabatic cooling, which performs adiabatic work. As a result, there is no possibility of a perpetual motion system. It follows that a reduction in the increase of entropy in a specified process, such as a chemical reaction, means that it is energetically more efficient.

It follows from the second law of thermodynamics that the entropy of a system that is not isolated may decrease. An air conditioner, for example, may cool the air in a room, thus reducing the entropy of the air of that system. The heat expelled from the room (the system), which the air conditioner transports and discharges to the outside air, will always make a bigger contribution to the entropy of the environment than will the decrease of the entropy of the air of that system. Thus, the total of the entropy of the room plus the entropy of the environment increases, in agreement with the second law of thermodynamics.

In mechanics, the second law in conjunction with the fundamental thermodynamic relation places limits on a system’s ability to do useful work.[29] The entropy change of a system at temperature T absorbing an infinitesimal amount of heat δq in a reversible way is given by δq/T. More explicitly, an amount of energy TR S is not available to do useful work, where TR is the temperature of the coldest accessible reservoir or heat sink external to the system. For further discussion, see Exergy.

Statistical mechanics demonstrates that entropy is governed by probability, thus allowing for a decrease in disorder even in an isolated system. Although this is possible, such an event has a small probability of occurring, making it unlikely.[30]

6.2.4 Applications

The fundamental thermodynamic relation

Main article: Fundamental thermodynamic relation

The entropy of a system depends on its internal energy and its external parameters, such as its volume. In the thermodynamic limit, this fact leads to an equation relating the change in the internal energy U to changes in the entropy and the external parameters. This relation is known as the fundamental thermodynamic relation. If external pressure P bears on the volume V as the only external parameter, this relation is:

dU = T dS − P dV

Since both internal energy and entropy are monotonic functions of temperature T, implying that the internal energy is fixed when one specifies the entropy and the volume, this relation is valid even if the change from one state of thermal equilibrium to another with infinitesimally larger entropy and volume happens in a non-quasistatic way (so during this change the system may be very far out of thermal equilibrium, and then the entropy, pressure and temperature may not exist).

The fundamental thermodynamic relation implies many thermodynamic identities that are valid in general, independent of the microscopic details of the system. Important examples are the Maxwell relations and the relations between heat capacities.

Entropy in chemical thermodynamics

Thermodynamic entropy is central in chemical thermodynamics, enabling changes to be quantified and the outcome of reactions predicted. The second law of thermodynamics states that entropy in an isolated system – the combination of a subsystem under study and its surroundings – increases during all spontaneous chemical and physical processes. The Clausius equation of δq_rev/T = ΔS introduces the measurement of entropy change, ΔS. Entropy change describes the direction and quantifies the magnitude of simple changes such as heat transfer between systems – always from hotter to cooler spontaneously. The thermodynamic entropy therefore has the dimension of energy divided by temperature, and the unit joule per kelvin (J/K) in the International System of Units (SI).

Thermodynamic entropy is an extensive property, meaning that it scales with the size or extent of a system. In many processes it is useful to specify the entropy as an intensive property independent of the size, as a specific entropy characteristic of the type of system studied. Specific entropy may be expressed relative to a unit of mass, typically the kilogram (unit: J kg−1 K−1). Alternatively, in chemistry, it is also referred to one mole of substance, in which case it is called the molar entropy, with a unit of J mol−1 K−1.

Thus, when one mole of substance at about 0 K is warmed by its surroundings to 298 K, the sum of the incremental values of q_rev/T constitute each element’s or compound’s standard molar entropy, an indicator of the amount of energy stored by a substance at 298 K.[31][32] Entropy change also measures the mixing of substances as a summation of their relative quantities in the final mixture.[33]

Entropy is equally essential in predicting the extent and direction of complex chemical reactions. For such applications, ΔS must be incorporated in an expression that includes both the system and its surroundings, ΔS_universe = ΔS_surroundings + ΔS_system. This expression becomes, via some steps, the Gibbs free energy equation for reactants and products in the system: ΔG [the Gibbs free energy change of the system] = ΔH [the enthalpy change] − T ΔS [the entropy change].[31]

Entropy balance equation for open systems

[Figure: An open system with heat added Q, shaft work W_shaft performed external to the boundary, and enthalpy flows H_in and H_out across the open system boundary. During steady-state continuous operation, an entropy balance applied to an open system accounts for system entropy changes related to heat flow and mass flow across the system boundary.]

In chemical engineering, the principles of thermodynamics are commonly applied to “open systems”, i.e. those in which heat, work, and mass flow across the system boundary. Flows of both heat (Q̇) and work, i.e. Ẇ_S (shaft work) and P(dV/dt) (pressure–volume work), across the system boundaries, in general cause changes in the entropy of the system. Transfer as heat entails entropy transfer Q̇/T, where T is the absolute thermodynamic temperature of the system at the point of the heat flow. If there are mass flows across the system boundaries, they will also influence the total entropy of the system. This account, in terms of heat and work, is valid only for cases in which the work and heat transfers are by paths physically distinct from the paths of entry and exit of matter from the system.[34][35]

To derive a generalized entropy balance equation, we start with the general balance equation for the change in any extensive quantity Θ in a thermodynamic system, a quantity that may be either conserved, such as energy, or non-conserved, such as entropy. The basic generic balance expression states that dΘ/dt, i.e. the rate of change of Θ in the system, equals the rate at which Θ enters the system at the boundaries, minus the rate at which Θ leaves the system across the system boundaries, plus the rate at which Θ is generated within the system. For an open thermodynamic system in which heat and work are transferred by paths separate from the paths for transfer of matter, using this generic balance equation, with respect to the rate of change with time t of the extensive quantity entropy S, the entropy balance equation is:[36][note 2]

dS/dt = Σ_{k=1}^{K} Ṁ_k Ŝ_k + Q̇/T + Ṡ_gen

where

Σ_{k=1}^{K} Ṁ_k Ŝ_k = the net rate of entropy flow due to the flows of mass into and out of the system (where Ŝ = entropy per unit mass),

Q̇/T = the rate of entropy flow due to the flow of heat across the system boundary,

Ṡ_gen = the rate of entropy production within the system. This entropy production arises from processes within the system, including chemical reactions, internal matter diffusion, internal heat transfer, and frictional effects such as viscosity occurring within the system from mechanical work transfer to or from the system.

Note, also, that if there are multiple heat flows, the term Q̇/T will be replaced by Σ_j Q̇_j/T_j, where Q̇_j is the heat flow and T_j is the temperature at the jth heat flow port into the system.

6.2.5 Entropy change formulas for simple processes

For certain simple transformations in systems of constant composition, the entropy changes are given by simple formulas.[37]

Isothermal expansion or compression of an ideal gas

For the expansion (or compression) of an ideal gas from an initial volume V0 and pressure P0 to a final volume V and pressure P at any constant temperature, the change in entropy is given by:

ΔS = nR ln(V/V0) = −nR ln(P/P0).

Here n is the number of moles of gas and R is the ideal gas constant. These equations also apply for expansion into a finite vacuum or a throttling process, where the temperature, internal energy and enthalpy for an ideal gas remain constant.

Cooling and heating

For heating or cooling of any system (gas, liquid or solid) at constant pressure from an initial temperature T0 to a final temperature T, the entropy change is

ΔS = nC_P ln(T/T0),

provided that the constant-pressure molar heat capacity (or specific heat) C_P is constant and that no phase transition occurs in this temperature interval.

Similarly at constant volume, the entropy change is

ΔS = nC_V ln(T/T0),

where the constant-volume heat capacity C_V is constant and there is no phase change. At low temperatures near absolute zero, heat capacities of solids quickly drop off to near zero, so the assumption of constant heat capacity does not apply.[38]

Since entropy is a state function, the entropy change of any process in which temperature and volume both vary is the same as for a path divided into two steps – heating at constant volume and expansion at constant temperature. For an ideal gas, the total entropy change is[39]

ΔS = nC_V ln(T/T0) + nR ln(V/V0).

Similarly, if the temperature and pressure of an ideal gas both vary,

ΔS = nC_P ln(T/T0) − nR ln(P/P0).

Phase transitions

Reversible phase transitions occur at constant temperature and pressure. The reversible heat is the enthalpy change for the transition, and the entropy change is the enthalpy change divided by the thermodynamic temperature. For fusion (melting) of a solid to a liquid at the melting point T_m, the entropy of fusion is

ΔS_fus = ΔH_fus / T_m.

Similarly, for vaporization of a liquid to a gas at the boiling point T_b, the entropy of vaporization is

ΔS_vap = ΔH_vap / T_b.
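Because entropy is a state function, the (T, V) and (T, P) formulas for an ideal gas must give the same ΔS for the same pair of end states, with the pressures fixed by PV = nRT. A minimal sketch checking this; the numerical state values are illustrative:

```python
import math

R = 8.314  # ideal gas constant, J/(mol K)

def dS_TV(n, Cv, T0, T, V0, V):
    """ΔS = n C_V ln(T/T0) + n R ln(V/V0) for an ideal gas."""
    return n * Cv * math.log(T / T0) + n * R * math.log(V / V0)

def dS_TP(n, Cp, T0, T, P0, P):
    """ΔS = n C_P ln(T/T0) - n R ln(P/P0) for an ideal gas."""
    return n * Cp * math.log(T / T0) - n * R * math.log(P / P0)

# One mole of a monatomic ideal gas: C_V = 3R/2 and C_P = C_V + R.
n, Cv = 1.0, 1.5 * R
Cp = Cv + R
T0, V0 = 300.0, 0.025              # initial state (hypothetical values)
T, V = 450.0, 0.040                # final state
P0 = n * R * T0 / V0               # pressures follow from PV = nRT
P = n * R * T / V

# State-function property: both routes give the same entropy change.
print(math.isclose(dS_TV(n, Cv, T0, T, V0, V),
                   dS_TP(n, Cp, T0, T, P0, P)))  # True
```

The agreement follows algebraically from C_P = C_V + R and ln(P/P0) = ln(T/T0) − ln(V/V0), so the check holds for any choice of end states.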

6.2.6 Approaches to understanding entropy

As a fundamental aspect of thermodynamics and physics, several different approaches to entropy beyond that of Clausius and Boltzmann are valid.

Standard textbook definitions

The following is a list of additional definitions of entropy from a collection of textbooks:

• a measure of energy dispersal at a specific temperature.

• a measure of disorder in the universe or of the availability of the energy in a system to do work.[40]

• a measure of a system’s thermal energy per unit temperature that is unavailable for doing useful work.[41]

In Boltzmann’s definition, entropy is a measure of the number of possible microscopic states (or microstates) of a system in thermodynamic equilibrium. Consistent with the Boltzmann definition, the second law of thermodynamics needs to be re-worded such that entropy increases over time, though the underlying principle remains the same.

Order and disorder

Main article: Entropy (order and disorder)

Entropy has often been loosely associated with the amount of order or disorder, or of chaos, in a thermodynamic system. The traditional qualitative description of entropy is that it refers to changes in the status quo of the system and is a measure of “molecular disorder” and the amount of wasted energy in a dynamical energy transformation from one state or form to another. In this direction, several recent authors have derived exact entropy formulas to account for and measure disorder and order in atomic and molecular assemblies.[42][43][44] One of the simpler entropy order/disorder formulas is that derived in 1984 by thermodynamic physicist Peter Landsberg, based on a combination of thermodynamics and information theory arguments. He argues that when constraints operate on a system, such that it is prevented from entering one or more of its possible or permitted states, as contrasted with its forbidden states, the measure of the total amount of “disorder” in the system is given by:[43][44]

Disorder = C_D / C_I.

Similarly, the total amount of “order” in the system is given by:

Order = 1 − C_O / C_I,

in which C_D is the “disorder” capacity of the system, which is the entropy of the parts contained in the permitted ensemble, C_I is the “information” capacity of the system, an expression similar to Shannon’s channel capacity, and C_O is the “order” capacity of the system.[42]

Energy dispersal

Main article: Entropy (energy dispersal)

The concept of entropy can be described qualitatively as a measure of energy dispersal at a specific temperature.[45] Similar terms have been in use from early in the history of classical thermodynamics, and with the development of statistical thermodynamics and quantum theory, entropy changes have been described in terms of the mixing or “spreading” of the total energy of each constituent of a system over its particular quantized energy levels.

Ambiguities in the terms disorder and chaos, which usually have meanings directly opposed to equilibrium, contribute to widespread confusion and hamper comprehension of entropy for most students.[46] As the second law of thermodynamics shows, in an isolated system internal portions at different temperatures will tend to adjust to a single uniform temperature and thus produce equilibrium. A recently developed educational approach avoids ambiguous terms and describes such spreading out of energy as dispersal, which leads to loss of the differentials required for work even though the total energy remains constant in accordance with the first law of thermodynamics[47] (compare discussion in next section). Physical chemist Peter Atkins, for example, who previously wrote of dispersal leading to a disordered state, now writes that “spontaneous changes are always accompanied by a dispersal of energy”.[48]

Relating entropy to energy usefulness

Following on from the above, it is possible (in a thermal context) to regard entropy as an indicator or measure of

the effectiveness or usefulness of a particular quantity of energy.[49] This is because energy supplied at a high temperature (i.e. with low entropy) tends to be more useful than the same amount of energy available at room temperature. Mixing a hot parcel of a fluid with a cold one produces a parcel of intermediate temperature, in which the overall increase in entropy represents a “loss” which can never be replaced. Thus, the fact that the entropy of the universe is steadily increasing means that its total energy is becoming less useful: eventually, this will lead to the “heat death of the Universe”.

Entropy and adiabatic accessibility

A definition of entropy based entirely on the relation of adiabatic accessibility between equilibrium states was given by E. H. Lieb and J. Yngvason in 1999.[50] This approach has several predecessors, including the pioneering work of Constantin Carathéodory from 1909[51] and the monograph by R. Giles from 1964.[52] In the setting of Lieb and Yngvason, one starts by picking, for a unit amount of the substance under consideration, two reference states X0 and X1 such that the latter is adiabatically accessible from the former but not vice versa. Defining the entropies of the reference states to be 0 and 1 respectively, the entropy of a state X is defined as the largest number λ such that X is adiabatically accessible from a composite state consisting of an amount λ in the state X1 and a complementary amount, (1 − λ), in the state X0. A simple but important result within this setting is that entropy is uniquely determined, apart from a choice of unit and an additive constant for each chemical element, by the following properties: it is monotonic with respect to the relation of adiabatic accessibility, additive on composite systems, and extensive under scaling.

Entropy in quantum mechanics

Main article: von Neumann entropy

In quantum statistical mechanics, the concept of entropy was developed by John von Neumann and is generally referred to as “von Neumann entropy”,

S = −kB Tr(ρ log ρ),

where ρ is the density matrix and Tr is the trace operator. This upholds the correspondence principle, because in the classical limit, when the phases between the basis states used for the classical probabilities are purely random, this expression is equivalent to the familiar classical definition of entropy,

S = −kB Σ_i p_i log p_i,

i.e. in such a basis the density matrix is diagonal.

Von Neumann established a rigorous mathematical framework for quantum mechanics with his work Mathematische Grundlagen der Quantenmechanik. He provided in this work a theory of measurement, where the usual notion of wave function collapse is described as an irreversible process (the so-called von Neumann or projective measurement). Using this concept, in conjunction with the density matrix, he extended the classical concept of entropy into the quantum domain.

Information theory

I thought of calling it “information”, but the word was overly used, so I decided to call it “uncertainty”. [...] Von Neumann told me, “You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.”

– Conversation between Claude Shannon and John von Neumann regarding what name to give to the attenuation in phone-line signals[53]

Main articles: Entropy (information theory), Entropy in thermodynamics and information theory and Entropic uncertainty

When viewed in terms of information theory, the entropy state function is simply the amount of information (in the Shannon sense) that would be needed to specify the full microstate of the system. This is left unspecified by the macroscopic description.

In information theory, entropy is the measure of the amount of information that is missing before reception and is sometimes referred to as Shannon entropy.[54] Shannon entropy is a broad and general concept which finds applications in information theory as well as thermodynamics. It was originally devised by Claude Shannon in 1948 to study the amount of information in a transmitted message. The definition of the information entropy is, however, quite general, and is expressed in terms of a discrete set of probabilities p_i, so that

H(X) = −Σ_{i=1}^{n} p(x_i) log p(x_i).

In the case of transmitted messages, these probabilities were the probabilities that a particular message was actually transmitted, and the entropy of the message system was a measure of the average amount of information in a message. For the case of equal probabilities (i.e. each message is equally probable), the Shannon entropy (in bits) is just the number of yes/no questions needed to determine the content of the message.[18]

The question of the link between information entropy and thermodynamic entropy is a debated topic. While most authors argue that there is a link between the two,[55][56][57][58][59] a few argue that they have nothing to do with each other.[18] The expressions for the two entropies are similar. If W is the number of microstates that can yield a given macrostate, and each microstate has the same a priori probability, then that probability is p = 1/W. The Shannon entropy (in nats) will be:

H = −Σ_{i=1}^{W} p log(p) = log(W)

and if entropy is measured in units of k per nat, then the entropy is given[60] by:

H = k log(W)

which is the famous Boltzmann entropy formula when k is Boltzmann’s constant, which may be interpreted as the thermodynamic entropy per nat. There are many ways of demonstrating the equivalence of “information entropy” and “physics entropy”, that is, the equivalence of “Shannon entropy” and “Boltzmann entropy”. Nevertheless, some authors argue for dropping the word entropy for the H function of information theory and using Shannon’s other term “uncertainty” instead.[61]

6.2.7 Interdisciplinary applications of entropy

Although the concept of entropy was originally a thermodynamic construct, it has been adapted in other fields of study, including information theory, psychodynamics, thermoeconomics/ecological economics, and evolution.[42][62][63][64][65] For instance, an entropic argument has recently been proposed for explaining the preference of cave spiders in choosing a suitable area for laying their eggs.[66]

Thermodynamic and statistical mechanics concepts

• Entropy unit – a non-SI unit of thermodynamic entropy, usually denoted “e.u.” and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole.[67]

• Gibbs entropy – the usual statistical mechanical entropy of a thermodynamic system.

• Boltzmann entropy – a type of Gibbs entropy, which neglects internal statistical correlations in the overall particle distribution.

• Tsallis entropy – a generalization of the standard Boltzmann–Gibbs entropy.

• Standard molar entropy – the entropy content of one mole of substance, under conditions of standard temperature and pressure.

• Residual entropy – the entropy present after a substance is cooled arbitrarily close to absolute zero.

• Entropy of mixing – the change in the entropy when two different chemical substances or components are mixed.

• Loop entropy – the entropy lost upon bringing together two residues of a polymer within a prescribed distance.

• Conformational entropy – the entropy associated with the physical arrangement of a polymer chain that assumes a compact or globular state in solution.

• Entropic force – a microscopic force or reaction tendency related to system organization changes, molecular frictional considerations, and statistical variations.

• Free entropy – an entropic thermodynamic potential analogous to the free energy.

• Entropic explosion – an explosion in which the reactants undergo a large change in volume without releasing a large amount of heat.

• Entropy change – a change in entropy dS between two equilibrium states is given by the heat transferred dQ_rev divided by the absolute temperature T of the system in this interval.

• Sackur–Tetrode entropy – the entropy of a monatomic classical ideal gas determined via quantum considerations.

The arrow of time

Main article: Entropy (arrow of time)

Entropy is the only quantity in the physical sciences that seems to imply a particular direction of progress, sometimes called an arrow of time. As time progresses, the second law of thermodynamics states that the entropy of an isolated system never decreases. Hence, from this perspective, entropy measurement is thought of as a kind of clock.

Cosmology

Main article: Heat death of the universe

Since a finite universe is an isolated system, the second law of thermodynamics states that its total entropy is constantly increasing. It has been speculated, since the 19th century, that the universe is fated to a heat death in which all the energy ends up as a homogeneous distribution of thermal energy, so that no more work can be extracted from any source.

If the universe can be considered to have generally increasing entropy, then – as Sir Roger Penrose has pointed out – gravity plays an important role in the increase because gravity causes dispersed matter to accumulate into stars, which collapse eventually into black holes. The entropy of a black hole is proportional to the surface area of the black hole’s event horizon.[68] Jacob Bekenstein and Stephen Hawking have shown that black holes have the maximum possible entropy of any object of equal size. This makes them likely end points of all entropy-increasing processes, if they are totally effective matter and energy traps. However, the escape of energy from black holes might be possible due to quantum activity; see Hawking radiation. Hawking has recently changed his stance on some details, in a paper which largely redefined the event horizons of black holes.[69]

The role of entropy in cosmology remains a controversial subject since the time of Ludwig Boltzmann. Recent work has cast some doubt on the heat death hypothesis and the applicability of any simple thermodynamic model to the universe in general. Although entropy does increase in the model of an expanding universe, the maximum possible entropy rises much more rapidly, moving the universe further from the heat death with time, not closer.[70][71][72] This results in an “entropy gap” pushing the system further away from the posited heat death equilibrium.[73] Other complicating factors, such as the energy density of the vacuum and macroscopic quantum effects, are difficult to reconcile with thermodynamical models, making any predictions of large-scale thermodynamics extremely difficult.[74] The entropy gap is widely believed to have been originally opened up by the early rapid exponential expansion of the universe.

Economics

See also: Nicholas Georgescu-Roegen § The relevance of thermodynamics to economics and Ecological economics § Methodology

Romanian American economist Nicholas Georgescu-Roegen, a progenitor in economics and a paradigm founder of ecological economics, made extensive use of the entropy concept in his magnum opus, The Entropy Law and the Economic Process.[75] Owing to Georgescu-Roegen’s work, the laws of thermodynamics now form an integral part of the ecological economics school.[76]:204f [77]:29-35 Although his work was blemished somewhat by mistakes, a full chapter on the economics of Georgescu-Roegen has approvingly been included in one elementary physics textbook on the historical development of thermodynamics.[78]:95-112

In economics, Georgescu-Roegen’s work has generated the term 'entropy pessimism'.[79]:116 Since the 1990s, leading ecological economist and steady-state theorist Herman Daly – a student of Georgescu-Roegen – has been the economics profession’s most influential proponent of the entropy pessimism position.[80]:545f

6.2.8 See also

• Autocatalytic reactions and order creation

• Brownian ratchet

• Clausius–Duhem inequality

• Configuration entropy

• Departure function

• Enthalpy • Entropic force • Entropy (information theory) • Entropy (computing) • Entropy and life • Entropy (order and disorder) • Entropy rate • Geometrical frustration • Laws of thermodynamics

142

CHAPTER 6. CHAPTER 6. SYSTEM PROPERTIES

• Multiplicity function • Negentropy (negative entropy) • Orders of magnitude (entropy) • Stirling’s formula • Thermodynamic databases for pure substances • Thermodynamic potential • Wavelet entropy

6.2.9 Notes

[1] A machine in this context includes engineered devices as well as biological organisms.
[2] The overdots represent derivatives of the quantities with respect to time.

6.2.10 References

[1] "Carnot, Sadi (1796–1832)". Wolfram Research. 2007. Retrieved 2010-02-24.
[2] McCulloch, Richard S. (1876). Treatise on the Mechanical Theory of Heat and its Applications to the Steam-Engine, etc. D. Van Nostrand.
[3] Clausius, Rudolf (1850). On the Motive Power of Heat, and on the Laws which can be deduced from it for the Theory of Heat. Poggendorff's Annalen der Physik, LXXIX (Dover Reprint). ISBN 0-486-59065-8.
[4] The Scientific Papers of J. Willard Gibbs in Two Volumes, 1. Longmans, Green, and Co. 1906. p. 11. Retrieved 2011-02-26.
[5] J. A. McGovern, 2.5 Entropy, at the Wayback Machine (archived September 23, 2012).
[6] Irreversibility, Entropy Changes, and "Lost Work". Thermodynamics and Propulsion, Z. S. Spakovszky, 2002.
[7] What is entropy? Thermodynamics of Chemical Equilibrium, by S. Lower, 2007.
[8] B. H. Lavenda, A New Perspective on Thermodynamics. Springer, 2009, Sec. 2.3.4.
[9] S. Carnot, Reflexions on the Motive Power of Fire, translated and annotated by R. Fox, Manchester University Press, 1986, p. 26; C. Truesdell, The Tragicomical History of Thermodynamics, Springer, 1980, pp. 78–85.
[10] J. Clerk-Maxwell, Theory of Heat, 10th ed. Longmans, Green and Co., 1891, pp. 155–158.
[11] R. Clausius, The Mechanical Theory of Heat, translated by T. Archer Hirst, van Voorst, 1867, p. 28.
[12] Clausius, Rudolf (1865). Ueber verschiedene für die Anwendung bequeme Formen der Hauptgleichungen der mechanischen Wärmetheorie: vorgetragen in der naturforsch. Gesellschaft den 24. April 1865. p. 46.
[13] Atkins, Peter; De Paula, Julio (2006). Physical Chemistry, 8th ed. Oxford University Press. p. 79. ISBN 0-19-870072-5.
[14] Engel, Thomas; Reid, Philip (2006). Physical Chemistry. Pearson Benjamin Cummings. p. 86. ISBN 0-8053-3842-X.
[15] McGraw-Hill Concise Encyclopedia of Chemistry, 2004.
[16] Sethna, J. Statistical Mechanics. Oxford University Press, 2006, p. 78.
[17] Barnes & Noble's Essential Dictionary of Science, 2004.
[18] Frigg, R. and Werndl, C. "Entropy – A Guide for the Perplexed". In Probabilities in Physics; Beisbart, C. and Hartmann, S., Eds.; Oxford University Press, Oxford, 2010.
[19] Schroeder, Daniel V. An Introduction to Thermal Physics. Addison Wesley Longman, 1999, p. 57.
[20] "EntropyOrderParametersComplexity.pdf, www.physics.cornell.edu" (PDF). Retrieved 2012-08-17.
[21] Jaynes, E. T., "The Gibbs Paradox". In Maximum Entropy and Bayesian Methods; Smith, C. R.; Erickson, G. J.; Neudorfer, P. O., Eds.; Kluwer Academic: Dordrecht, 1992, pp. 1–22 (PDF). Retrieved 2012-08-17.
[22] Sandler, S. I., Chemical and Engineering Thermodynamics, 3rd ed. Wiley, New York, 1999, p. 91.
[23] McQuarrie, D. A.; Simon, J. D., Physical Chemistry: A Molecular Approach, University Science Books, Sausalito, 1997, p. 817.
[24] Haynie, Donald T. (2001). Biological Thermodynamics. Cambridge University Press. ISBN 0-521-79165-0.
[25] Oxford Dictionary of Science, 2005.
[26] de Rosnay, Joel (1979). The Macroscope – a New World View (written by an M.I.T.-trained biochemist). Harper & Row, Publishers. ISBN 0-06-011029-5.
[27] J. A. McGovern, Heat Capacities, at the Wayback Machine (archived August 19, 2012).
[28] Ben-Naim, Arieh, "On the So-Called Gibbs Paradox, and on the Real Paradox", Entropy, 9, pp. 132–136, 2007.
[29] Daintith, John (2005). Oxford Dictionary of Physics. Oxford University Press. ISBN 0-19-280628-9.
[30] "Entropy production theorems and some consequences", Physical Review E; Saha, Arnab; Lahiri, Sourabh; Jayannavar, A. M.; The American Physical Society: 14 July 2009, pp. 1–10. link.aps.org. Retrieved 2012-08-17.
[31] Moore, J. W.; Stanistski, C. L.; Jurs, P. C. (2005). Chemistry, The Molecular Science. Brooks Cole. ISBN 0-534-42201-2.
[32] Jungermann, A. H. (2006). "Entropy and the Shelf Model: A Quantum Physical Approach to a Physical Property". Journal of Chemical Education 83 (11): 1686–1694. Bibcode:2006JChEd..83.1686J. doi:10.1021/ed083p1686.
[33] Levine, I. N. (2002). Physical Chemistry, 5th ed. McGraw-Hill. ISBN 0-07-231808-2.
[34] Born, M. (1949). Natural Philosophy of Cause and Chance. Oxford University Press, London, pp. 44, 146–147.
[35] Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ISBN 0122456017, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, p. 35.
[36] Sandler, Stanley I. (1989). Chemical and Engineering Thermodynamics. John Wiley & Sons. ISBN 0-471-83050-X.
[37] "GRC.nasa.gov". GRC.nasa.gov. 2000-03-27. Retrieved 2012-08-17.
[38] The Third Law. Chemistry 433, Stefan Franzen, ncsu.edu.
[39] "GRC.nasa.gov". GRC.nasa.gov. 2008-07-11. Retrieved 2012-08-17.
[40] Gribbin's Q Is for Quantum: An Encyclopedia of Particle Physics, Free Press, ISBN 0-684-85578-X, 2000.
[41] "Entropy". Encyclopædia Britannica.
[42] Brooks, Daniel R.; Wiley, E. O. (1988). Evolution as Entropy – Towards a Unified Theory of Biology. University of Chicago Press. ISBN 0-226-07574-5.
[43] Landsberg, P. T. (1984). "Is Equilibrium always an Entropy Maximum?". J. Stat. Physics 35: 159–169. Bibcode:1984JSP....35..159L. doi:10.1007/bf01017372.
[44] Landsberg, P. T. (1984). "Can Entropy and 'Order' Increase Together?". Physics Letters 102A: 171–173. Bibcode:1984PhLA..102..171L. doi:10.1016/0375-9601(84)90934-4.
[45] Frank L. Lambert, A Student's Approach to the Second Law and Entropy.
[46] Carson, E. M. and Watson, J. R. (Department of Educational and Professional Studies, Kings College, London), "Undergraduate students' understandings of entropy and Gibbs free energy", University Chemistry Education – 2002 Papers, Royal Society of Chemistry.
[47] Frank L. Lambert, JCE 2002 (79) 187 [Feb], "Disorder – A Cracked Crutch for Supporting Entropy Discussions".
[48] Atkins, Peter (1984). The Second Law. Scientific American Library. ISBN 0-7167-5004-X.
[49] Sandra Saary (Head of Science, Latifa Girls' School, Dubai) (23 February 1993). "Book Review of 'A Science Miscellany'". Khaleej Times (Galadari Press, UAE): XI.
[50] Elliott H. Lieb, Jakob Yngvason: "The Physics and Mathematics of the Second Law of Thermodynamics", Phys. Rep. 310, pp. 1–96 (1999).
[51] Constantin Carathéodory: "Untersuchungen über die Grundlagen der Thermodynamik", Math. Ann., 67, pp. 355–386, 1909.
[52] Robin Giles: Mathematical Foundations of Thermodynamics, Pergamon, Oxford, 1964.
[53] M. Tribus, E. C. McIrvine, "Energy and information", Scientific American, 224 (September 1971), pp. 178–184.
[54] Balian, Roger (2004). "Entropy, a Protean concept". In Dalibard, Jean. Poincaré Seminar 2003: Bose-Einstein condensation – entropy. Basel: Birkhäuser. pp. 119–144. ISBN 9783764371166.
[55] Brillouin, Leon (1956). Science and Information Theory. ISBN 0-486-43918-6.
[56] Georgescu-Roegen, Nicholas (1971). The Entropy Law and the Economic Process. Harvard University Press. ISBN 0-674-25781-2.
[57] Chen, Jing (2005). The Physical Foundation of Economics – an Analytical Thermodynamic Theory. World Scientific. ISBN 981-256-323-7.
[58] Kalinin, M. I.; Kononogov, S. A. (2005). "Boltzmann's constant". Measurement Techniques 48: 632–636. doi:10.1007/s11018-005-0195-9.
[59] Ben-Naim, A. (2008). Entropy Demystified. World Scientific.
[60] "Edwin T. Jaynes – Bibliography". Bayes.wustl.edu. 1998-03-02. Retrieved 2009-12-06.
[61] Schneider, Tom, DELILA system (Deoxyribonucleic acid Library Language) (Information Theory Analysis of binding sites), Laboratory of Mathematical Biology, National Cancer Institute, FCRDC Bldg. 469, Rm. 144, P.O. Box B, Frederick, MD 21702-1201, USA.
[62] Avery, John (2003). Information Theory and Evolution. World Scientific. ISBN 981-238-399-9.
[63] Yockey, Hubert P. (2005). Information Theory, Evolution, and the Origin of Life. Cambridge University Press. ISBN 0-521-80293-8.
[64] Chiavazzo, Eliodoro; Fasano, Matteo; Asinari, Pietro (2013). "Inference of analytical thermodynamic models for biological networks". Physica A: Statistical Mechanics and its Applications 392: 1122–1132. Bibcode:2013PhyA..392.1122C. doi:10.1016/j.physa.2012.11.030.
[65] Chen, Jing (2015). The Unity of Science and Economics: A New Foundation of Economic Theory. Springer. http://www.springer.com/us/book/9781493934645
[66] Chiavazzo, Eliodoro; Isaia, Marco; Mammola, Stefano; Lepore, Emiliano; Ventola, Luigi; Asinari, Pietro; Pugno, Nicola Maria (2015). "Cave spiders choose optimal environmental factors with respect to the generated entropy when laying their cocoon". Scientific Reports 5: 7611. Bibcode:2015NatSR...5E7611C. doi:10.1038/srep07611.
[67] IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version (2006–): "Entropy unit".
[68] von Baeyer, Christian H. (2003). Information – the New Language of Science. Harvard University Press. ISBN 0-674-01387-5; Srednicki, M. (August 1993). "Entropy and area". Phys. Rev. Lett. 71 (5): 666–669. arXiv:hep-th/9303048. Bibcode:1993PhRvL..71..666S. doi:10.1103/PhysRevLett.71.666. PMID 10055336; Callaway, D. J. E. (April 1996). "Surface tension, hydrophobicity, and black holes: The entropic connection". Phys. Rev. E 53 (4): 3738–3744. arXiv:cond-mat/9601111. Bibcode:1996PhRvE..53.3738C. doi:10.1103/PhysRevE.53.3738. PMID 9964684.
[69] Buchan, Lizzy. "Black holes do not exist, says Stephen Hawking". Cambridge News. Retrieved 27 January 2014.
[70] Layzer, David (1988). Growth of Order in the Universe. MIT Press.
[71] Chaisson, Eric J. (2001). Cosmic Evolution: The Rise of Complexity in Nature. Harvard University Press. ISBN 0-674-00342-X.
[72] Lineweaver, Charles H.; Davies, Paul C. W.; Ruse, Michael, eds. (2013). Complexity and the Arrow of Time. Cambridge University Press. ISBN 978-1-107-02725-1.
[73] Stenger, Victor J. (2007). God: The Failed Hypothesis. Prometheus Books. ISBN 1-59102-481-1.
[74] Gal-Or, Benjamin (1981, 1983, 1987). Cosmology, Physics and Philosophy. Springer Verlag. ISBN 0-387-96526-2.
[75] Georgescu-Roegen, Nicholas (1971). The Entropy Law and the Economic Process (PDF contains only the introductory chapter of the book). Cambridge, Massachusetts: Harvard University Press. ISBN 0674257804.
[76] Cleveland, Cutler J.; Ruth, Matthias (1997). "When, where, and by how much do biophysical limits constrain the economic process? A survey of Nicholas Georgescu-Roegen's contribution to ecological economics" (PDF). Ecological Economics (Amsterdam: Elsevier) 22 (3): 203–223. doi:10.1016/s0921-8009(97)00079-7.
[77] Daly, Herman E.; Farley, Joshua (2011). Ecological Economics. Principles and Applications (PDF contains full book) (2nd ed.). Washington: Island Press. ISBN 9781597266819.
[78] Schmitz, John E. J. (2007). The Second Law of Life: Energy, Technology, and the Future of Earth As We Know It (link to the author's science blog, based on his textbook). Norwich: William Andrew Publishing. ISBN 0815515375.
[79] Ayres, Robert U. (2007). "On the practical limits to substitution" (PDF). Ecological Economics (Amsterdam: Elsevier) 61: 115–128. doi:10.1016/j.ecolecon.2006.02.011.
[80] Kerschner, Christian (2010). "Economic de-growth vs. steady-state economy" (PDF). Journal of Cleaner Production (Amsterdam: Elsevier) 18: 544–551. doi:10.1016/j.jclepro.2009.10.019.

6.2.11 Further reading

• Atkins, Peter; De Paula, Julio (2006). Physical Chemistry, 8th ed. Oxford University Press. ISBN 0-19-870072-5.
• Baierlein, Ralph (2003). Thermal Physics. Cambridge University Press. ISBN 0-521-65838-1.
• Ben-Naim, Arieh (2007). Entropy Demystified. World Scientific. ISBN 981-270-055-2.
• Callen, Herbert B. (2001). Thermodynamics and an Introduction to Thermostatistics, 2nd ed. John Wiley and Sons. ISBN 0-471-86256-8.
• Chang, Raymond (1998). Chemistry, 6th ed. New York: McGraw Hill. ISBN 0-07-115221-0.
• Cutnell, John D.; Johnson, Kenneth J. (1998). Physics, 4th ed. John Wiley and Sons, Inc. ISBN 0-471-19113-2.
• Dugdale, J. S. (1996). Entropy and its Physical Meaning (2nd ed.). Taylor and Francis (UK); CRC (US). ISBN 0-7484-0569-0.
• Fermi, Enrico (1937). Thermodynamics. Prentice Hall. ISBN 0-486-60361-X.
• Goldstein, Martin; Goldstein, Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 0-674-75325-9.
• Gyftopoulos, E. P.; Beretta, G. P. (1991, 2005, 2010). Thermodynamics: Foundations and Applications. Dover. ISBN 0-486-43932-1.
• Haddad, Wassim M.; Chellaboina, VijaySekhar; Nersesov, Sergey G. (2005). Thermodynamics – A Dynamical Systems Approach. Princeton University Press. ISBN 0-691-12327-6.
• Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-7167-1088-9.
• Lambert, Frank L.; entropysite.oxy.edu
• Penrose, Roger (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. New York: A. A. Knopf. ISBN 0-679-45443-8.
• Reif, F. (1965). Fundamentals of Statistical and Thermal Physics. McGraw-Hill. ISBN 0-07-051800-9.
• Schroeder, Daniel V. (2000). Introduction to Thermal Physics. New York: Addison Wesley Longman. ISBN 0-201-38027-7.
• Serway, Raymond A. (1992). Physics for Scientists and Engineers. Saunders Golden Sunburst Series. ISBN 0-03-096026-6.
• Spirax-Sarco Limited, Entropy – A Basic Understanding. A primer on entropy tables for steam engineering.
• von Baeyer, Hans Christian (1998). Maxwell's Demon: Why Warmth Disperses and Time Passes. Random House. ISBN 0-679-43342-2.
• Entropy for beginners – a wikibook
• An Intuitive Guide to the Concept of Entropy Arising in Various Sectors of Science – a wikibook

6.2.12 External links

• Entropy and the Second Law of Thermodynamics – an A-level physics lecture with a detailed derivation of entropy based on the Carnot cycle
• Khan Academy: entropy lectures, part of the Chemistry playlist
  • Proof: S (or Entropy) is a valid state variable
  • Thermodynamic Entropy Definition Clarification
  • Reconciling Thermodynamic and State Definitions of Entropy
  • Entropy Intuition
  • More on Entropy
• The Second Law of Thermodynamics and Entropy – Yale OYC lecture, part of Fundamentals of Physics I (PHYS 200)
• Entropy and the Clausius inequality – MIT OCW lecture, part of 5.60 Thermodynamics & Kinetics, Spring 2008
• The Discovery of Entropy, by Adam Shulman. Hour-long video, January 2013.
• Moriarty, Philip; Merrifield, Michael (2009). "S Entropy". Sixty Symbols. Brady Haran for the University of Nottingham.
• "Entropy". Scholarpedia.

6.3 Pressure

This article is about pressure in the physical sciences. For other uses, see Pressure (disambiguation).

Pressure as exerted by particle collisions inside a closed container.

Pressure (symbol: p or P) is the force applied perpendicular to the surface of an object per unit area over which that force is distributed. Gauge pressure (also spelled gage pressure)[lower-alpha 1] is the pressure relative to the ambient pressure.

Various units are used to express pressure. Some of these derive from a unit of force divided by a unit of area; the SI unit of pressure, the pascal (Pa), for example, is one newton per square metre; similarly, the pound-force per square inch (psi) is the traditional unit of pressure in the imperial and US customary systems. Pressure may also be expressed in terms of standard atmospheric pressure; the atmosphere (atm) is equal to this pressure, and the torr is defined as 1⁄760 of this. Manometric units such as the centimetre of water, millimetre of mercury, and inch of mercury are used to express pressures in terms of the height of a column of a particular fluid in a manometer.
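The unit relationships just described can be written down as constants and checked numerically (a minimal sketch; the constant and function names are our own, the values are the standard definitions quoted above):

```python
# Pressure units expressed in pascals.
PASCAL = 1.0        # SI unit: one newton per square metre
PSI = 6894.757      # pound-force per square inch, approximate value in Pa
ATM = 101325.0      # standard atmosphere, defined exactly in Pa
TORR = ATM / 760.0  # torr, defined as 1/760 of an atmosphere

def convert(value, from_unit, to_unit):
    """Convert a pressure reading between two units given their sizes in Pa."""
    return value * from_unit / to_unit

# One atmosphere is 760 torr by definition:
assert abs(convert(1.0, ATM, TORR) - 760.0) < 1e-9
# One atmosphere is roughly 14.7 psi:
assert abs(convert(1.0, ATM, PSI) - 14.696) < 0.01
```

Working through a single conversion factor (pascals) avoids maintaining a quadratic table of pairwise conversions.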



6.3.1 Definition

Pressure is the amount of force acting per unit area. The symbol for pressure is p or P.[1] The IUPAC recommendation for pressure is a lower-case p.[2] However, upper-case P is widely used. The usage of P vs p depends on the field in which one is working, on the nearby presence of other symbols for quantities such as power and momentum, and on writing style.

Formula

Mathematically:

p = F/A

where:
p is the pressure,
F is the normal force,
A is the area of the surface on contact.

Pressure is a scalar quantity. It relates the vector surface element (a vector normal to the surface) with the normal force acting on it. The pressure is the scalar proportionality constant that relates the two normal vectors:

dFn = −p dA = −p n dA

The minus sign comes from the fact that the force is considered towards the surface element, while the normal vector points outward. The equation has meaning in that, for any surface S in contact with the fluid, the total force exerted by the fluid on that surface is the surface integral over S of the right-hand side of the above equation.

It is incorrect (although rather usual) to say "the pressure is directed in such or such direction". The pressure, as a scalar, has no direction. The force given by the previous relationship to the quantity has a direction, but the pressure does not. If we change the orientation of the surface element, the direction of the normal force changes accordingly, but the pressure remains the same.

Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. It is a fundamental parameter in thermodynamics, and it is conjugate to volume.

Units

Mercury column

The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m2 or kg·m−1·s−2). This name for the unit was added in 1971;[3] before that, pressure in SI was expressed simply in newtons per square metre.

Other units of pressure, such as pounds per square inch and bar, are also in common use. The CGS unit of pressure is the barye (Ba), equal to 1 dyn·cm−2 or 0.1 Pa. Pressure is sometimes expressed in grams-force or kilograms-force per square centimetre (g/cm2 or kg/cm2) and the like without properly identifying the force units. But using the names kilogram, gram, kilogram-force, or gram-force (or their symbols) as units of force is expressly forbidden in SI. The technical atmosphere (symbol: at) is 1 kgf/cm2 (98.0665 kPa or 14.223 psi).

Since a system under pressure has potential to perform work on its surroundings, pressure is a measure of potential energy stored per unit volume. It is therefore related to energy density and may be expressed in units such as joules per cubic metre (J/m3, which is equal to Pa).

Some meteorologists prefer the hectopascal (hPa) for atmospheric air pressure, which is equivalent to the older unit millibar (mbar). Similar pressures are given in kilopascals (kPa) in most other fields, where the hecto- prefix is rarely used. The inch of mercury is still used in the United States. Oceanographers usually measure underwater pressure in decibars (dbar) because pressure in the ocean increases by approximately one decibar per metre depth.

The standard atmosphere (atm) is an established constant. It is approximately equal to typical air pressure at earth mean sea level and is defined as 101325 Pa.

Because pressure is commonly measured by its ability to displace a column of liquid in a manometer, pressures are often expressed as a depth of a particular fluid (e.g., centimetres of water, millimetres of mercury or inches of mercury). The most common choices are mercury (Hg) and water; water is nontoxic and readily available, while mercury's high density allows a shorter column (and so a smaller manometer) to be used to measure a given pressure.
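The definition p = F/A and the manometer-column comparison above can be made concrete (a sketch; the function name is ours, and the densities are standard handbook values):

```python
def pressure(force_n, area_m2):
    """Scalar pressure in pascals: normal force (N) divided by area (m^2)."""
    return force_n / area_m2

# 100 N spread over 1 m^2 versus the same force on 1 cm^2:
assert pressure(100.0, 1.0) == 100.0                          # 100 Pa
assert abs(pressure(100.0, 0.0001) - 1_000_000.0) < 1e-3      # 1 MPa

# Column height h = p/(rho*g) needed to balance one standard atmosphere:
g = 9.80665
h_mercury = 101325.0 / (13595.1 * g)   # mercury, rho ~ 13595 kg/m^3
h_water = 101325.0 / (1000.0 * g)      # water, rho ~ 1000 kg/m^3
assert abs(h_mercury - 0.760) < 0.001  # ~760 mm: a compact manometer
assert abs(h_water - 10.33) < 0.01     # ~10.3 m: why mercury's density helps
```

The two column heights illustrate the point made in the text: mercury's high density allows a much shorter column, and hence a smaller instrument, for the same pressure.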
The pressure exerted by a column of liquid of height h and density ρ is given by the hydrostatic pressure equation p = ρgh, where g is the gravitational acceleration. Fluid density and local gravity can vary from one reading to another depending on local factors, so the height of a fluid column does not define pressure precisely. When millimetres of mercury or inches of mercury are quoted today, these units are not based on a physical column of mercury; rather, they have been given precise definitions that can be expressed in terms of SI units. One millimetre of mercury is approximately equal to one torr. The water-based units still depend on the density of water, a measured, rather than defined, quantity. These manometric units are still encountered in many fields. Blood pressure is measured in millimetres of mercury in most of the world, and lung pressures in centimetres of water are still common. Underwater divers use the metre sea water (msw or MSW) and foot sea water (fsw or FSW) units of pressure, and these are the standard units for pressure gauges used to measure

pressure exposure in diving chambers and personal decompression computers. A msw is defined as 0.1 bar and is not the same as a linear metre of depth; 33.066 fsw = 1 atm.[4] Note that the pressure conversion from msw to fsw is different from the length conversion: 10 msw = 32.6336 fsw, while 10 m = 32.8083 ft.

Gauge pressure is often given in units with 'g' appended, e.g. 'kPag', 'barg' or 'psig', and units for measurements of absolute pressure are sometimes given a suffix of 'a', to avoid confusion, for example 'kPaa', 'psia'. However, the US National Institute of Standards and Technology recommends that, to avoid confusion, any modifiers be instead applied to the quantity being measured rather than the unit of measure.[5] For example, "p = 100 psi" rather than "p = 100 psig". Differential pressure is expressed in units with 'd' appended; this type of measurement is useful when considering sealing performance or whether a valve will open or close.

Presently or formerly popular pressure units include the following:

• atmosphere (atm)
• manometric units:
  • centimetre, inch, millimetre (torr) and micrometre (mTorr, micron) of mercury
  • height of equivalent column of water, including millimetre (mm H2O), centimetre (cm H2O), metre, inch, and foot of water
• imperial and customary units:
  • kip, short ton-force, long ton-force, pound-force, ounce-force, and poundal per square inch
  • short ton-force and long ton-force per square inch
  • fsw (feet sea water), used in underwater diving, particularly in connection with diving pressure exposure and decompression
• non-SI metric units:
  • bar, decibar, millibar
  • msw (metres sea water), used in underwater diving, particularly in connection with diving pressure exposure and decompression
  • kilogram-force, or kilopond, per square centimetre (technical atmosphere)
  • gram-force and tonne-force (metric ton-force) per square centimetre
  • barye (dyne per square centimetre)



  • kilogram-force and tonne-force per square metre
  • sthene per square metre (pieze)

Examples

The effects of an external pressure of 700 bar on an aluminum cylinder with 5 mm wall thickness

As an example of varying pressures, a finger can be pressed against a wall without making any lasting impression; however, the same finger pushing a thumbtack can easily damage the wall. Although the force applied to the surface is the same, the thumbtack applies more pressure because the point concentrates that force into a smaller area. Pressure is transmitted to solid boundaries or across arbitrary sections of fluid normal to these boundaries or sections at every point. Unlike stress, pressure is defined as a scalar quantity. The negative gradient of pressure is called the force density.

Another example is a common knife: if we try to cut a fruit with the flat side, it obviously will not cut; but if we take the thin side, it will cut smoothly. The reason is that the flat side has a greater surface area (less pressure), and so it does not cut the fruit. When we take the thin side, the surface area is reduced, and so it cuts the fruit easily and quickly. This is one example of a practical application of pressure.

For gases, pressure is sometimes measured not as an absolute pressure, but relative to atmospheric pressure; such measurements are called gauge pressure. An example of this is the air pressure in an automobile tire, which might be said to be "220 kPa (32 psi)", but is actually 220 kPa (32 psi) above atmospheric pressure. Since atmospheric pressure at sea level is about 100 kPa (14.7 psi), the absolute pressure in the tire is therefore about 320 kPa (46.7 psi). In technical work, this is written "a gauge pressure of 220 kPa (32 psi)". Where space is limited, such as on pressure gauges, name plates, graph labels, and table headings, the use of a modifier in parentheses, such as "kPa (gauge)" or "kPa (absolute)", is permitted. In non-SI technical work, a gauge pressure of 32 psi is sometimes written as "32 psig" and an absolute pressure as "32 psia", though the other methods explained above that avoid attaching characters to the unit of pressure are preferred.[6]

Gauge pressure is the relevant measure of pressure wherever one is interested in the stress on storage vessels and the plumbing components of fluidics systems. However, whenever equation-of-state properties, such as densities or changes in densities, must be calculated, pressures must be expressed in terms of their absolute values. For instance, if the atmospheric pressure is 100 kPa, a gas (such as helium) at 200 kPa (gauge) (300 kPa [absolute]) is 50% denser than the same gas at 100 kPa (gauge) (200 kPa [absolute]). Focusing on gauge values, one might erroneously conclude the first sample had twice the density of the second one.
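The tire and helium examples above can be reproduced with a short helper (a sketch; the names are ours, and atmospheric pressure is taken as 100 kPa as in the text):

```python
P_ATM_KPA = 100.0  # atmospheric pressure assumed in the text's example

def absolute_from_gauge(p_gauge_kpa):
    """Absolute pressure = gauge pressure + ambient atmospheric pressure."""
    return p_gauge_kpa + P_ATM_KPA

# Tire at a gauge pressure of 220 kPa has an absolute pressure of 320 kPa:
assert absolute_from_gauge(220.0) == 320.0

# Ideal-gas density scales with absolute, not gauge, pressure:
ratio = absolute_from_gauge(200.0) / absolute_from_gauge(100.0)
assert ratio == 1.5  # 50% denser, not the factor of 2 the gauge values suggest
```

The final assertion is exactly the trap described in the text: comparing 200 kPa (gauge) with 100 kPa (gauge) suggests a 2:1 density ratio, but the absolute pressures give the correct 3:2.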

Scalar nature

In a static gas, the gas as a whole does not appear to move. The individual molecules of the gas, however, are in constant random motion. Because we are dealing with an extremely large number of molecules and because the motion of the individual molecules is random in every direction, we do not detect any motion. If we enclose the gas within a container, we detect a pressure in the gas from the molecules colliding with the walls of our container. We can put the walls of our container anywhere inside the gas, and the force per unit area (the pressure) is the same. We can shrink the size of our "container" down to a very small point (becoming less true as we approach the atomic scale), and the pressure will still have a single value at that point. Therefore, pressure is a scalar quantity, not a vector quantity. It has magnitude but no direction sense associated with it. Pressure acts in all directions at a point inside a gas. At the surface of a gas, the pressure force acts perpendicular (at right angle) to the surface.

A closely related quantity is the stress tensor σ, which relates the vector force F to the vector area A via the linear relation F = σA. This tensor may be expressed as the sum of the viscous stress tensor minus the hydrostatic pressure. The negative of the stress tensor is sometimes called the pressure tensor, but in the following, the term "pressure" will refer only to the scalar pressure.

According to the theory of general relativity, pressure increases the strength of a gravitational field (see stress–energy tensor) and so adds to the mass-energy cause of gravity. This effect is unnoticeable at everyday pressures but is significant in neutron stars, although it has not been experimentally tested.[7]

6.3.2 Types

Fluid pressure

Fluid pressure is the pressure at some point within a fluid, such as water or air (for more information specifically about liquid pressure, see section below). Fluid pressure occurs in one of two situations:

1. an open condition, called "open channel flow", e.g. the ocean, a swimming pool, or the atmosphere.
2. a closed condition, called "closed conduit", e.g. a water line or gas line.

Pressure in open conditions usually can be approximated as the pressure in "static" or non-moving conditions (even in the ocean where there are waves and currents), because the motions create only negligible changes in the pressure. Such conditions conform with principles of fluid statics. The pressure at any given point of a non-moving (static) fluid is called the hydrostatic pressure.

Closed bodies of fluid are either "static", when the fluid is not moving, or "dynamic", when the fluid can move as in either a pipe or by compressing an air gap in a closed container. The pressure in closed conditions conforms with the principles of fluid dynamics.

The concepts of fluid pressure are predominantly attributed to the discoveries of Blaise Pascal and Daniel Bernoulli. Bernoulli's equation can be used in almost any situation to determine the pressure at any point in a fluid. The equation makes some assumptions about the fluid, such as the fluid being ideal[8] and incompressible.[8] An ideal fluid is a fluid in which there is no friction; it is inviscid,[8] with zero viscosity.[8] The equation for all points of a system filled with a constant-density fluid is

p/γ + v²/(2g) + z = const [9]

where:
p = pressure of the fluid
γ = ρg = density · acceleration of gravity = specific weight of the fluid[8]
v = velocity of the fluid
g = acceleration of gravity
z = elevation
p/γ = pressure head
v²/(2g) = velocity head

Applications

• Hydraulic brakes
• Artesian well
• Blood pressure
• Hydraulic head
• Plant cell turgidity
• Pythagorean cup

Explosion or deflagration pressures

Explosion or deflagration pressures are the result of the ignition of explosive gases, mists, dust/air suspensions, in unconfined and confined spaces.

Negative pressures

While pressures are, in general, positive, there are several situations in which negative pressures may be encountered:

• When dealing in relative (gauge) pressures. For instance, an absolute pressure of 80 kPa may be described as a gauge pressure of −21 kPa (i.e., 21 kPa below an atmospheric pressure of 101 kPa).



• When attractive intermolecular forces (e.g., van der Waals forces or hydrogen bonds) between the particles of a fluid exceed repulsive forces due to thermal motion. These forces explain the ascent of sap in tall plants. An apparent negative pressure must act on water molecules at the top of any tree taller than 10 m, which is the pressure head of water that balances the atmospheric pressure. Intermolecular forces maintain cohesion of columns of sap that run continuously in xylem from the roots to the top leaves.[10]

• The Casimir effect can create a small attractive force due to interactions with vacuum energy; this force is sometimes termed “vacuum pressure” (not to be confused with the negative gauge pressure of a vacuum).

• For non-isotropic stresses in rigid bodies, depending on how the orientation of a surface is chosen, the same distribution of forces may have a component of positive pressure along one surface normal, with a component of negative pressure acting along another surface normal.

• The stresses in an electromagnetic field are generally non-isotropic, with the pressure normal to one surface element (the normal stress) being negative, and positive for surface elements perpendicular to this.

• In the cosmological constant.

Stagnation pressure

Stagnation pressure is the pressure a fluid exerts when it is forced to stop moving. Consequently, although a fluid moving at higher speed will have a lower static pressure, it may have a higher stagnation pressure when forced to a standstill. Static pressure and stagnation pressure are related by:

    p₀ = ½ρv² + p

where p₀ is the stagnation pressure, v is the flow velocity, and p is the static pressure. The pressure of a moving fluid can be measured using a Pitot tube, or one of its variations such as a Kiel probe or Cobra probe, connected to a manometer. Depending on where the inlet holes are located on the probe, it can measure static pressures or stagnation pressures.

[Figure: low pressure chamber in Bundesleistungszentrum Kienbaum, Germany.]

Surface pressure and surface tension

There is a two-dimensional analog of pressure – the lateral force per unit length applied on a line perpendicular to the force:

    π = F/l

Surface pressure is denoted by π and shares many similar properties with three-dimensional pressure. Properties of surface chemicals can be investigated by measuring pressure/area isotherms, as the two-dimensional analog of Boyle’s law, πA = k, at constant temperature. Surface tension is another example of surface pressure, but with a reversed sign, because “tension” is the opposite to “pressure”.

Pressure of an ideal gas

Main article: Ideal gas law

In an ideal gas, molecules have no volume and do not interact. According to the ideal gas law, pressure varies linearly with temperature and quantity, and inversely with volume:

    p = nRT/V

where p is the absolute pressure of the gas, n is the amount of substance, T is the absolute temperature, V is the volume, and R is the ideal gas constant.
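The ideal gas law can be checked numerically. A minimal sketch (the function name and the illustrative values are mine, not from the text): one mole near the triple-point temperature in a 22.4 L volume gives roughly atmospheric pressure.

```python
# Sketch of p = nRT/V in SI units; numbers are illustrative.
R = 8.314462618  # molar gas constant, J/(mol*K)

def ideal_gas_pressure(n_mol, t_kelvin, v_m3):
    """Absolute pressure of an ideal gas, in pascals."""
    return n_mol * R * t_kelvin / v_m3

# One mole at 273.15 K occupying 22.4 L is close to 1 atm (101325 Pa).
p = ideal_gas_pressure(1.0, 273.15, 0.0224)
```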

Real gases exhibit a more complex dependence on the variables of state.[11]

Vapor pressure

Main article: Vapor pressure

Vapor pressure is the pressure of a vapor in thermodynamic equilibrium with its condensed phases in a closed system. All liquids and solids have a tendency to evaporate into a gaseous form, and all gases have a tendency to condense back to their liquid or solid form.

The atmospheric pressure boiling point of a liquid (also known as the normal boiling point) is the temperature at which the vapor pressure equals the ambient atmospheric pressure. With any incremental increase in that temperature, the vapor pressure becomes sufficient to overcome atmospheric pressure and lift the liquid to form vapor bubbles inside the bulk of the substance. Bubble formation deeper in the liquid requires a higher pressure, and therefore higher temperature, because the fluid pressure increases above the atmospheric pressure as the depth increases.

The vapor pressure that a single component in a mixture contributes to the total pressure in the system is called partial vapor pressure.

Liquid pressure

See also: Fluid statics § Pressure in fluids at rest

When a person swims under the water, water pressure is felt acting on the person’s eardrums. The deeper that person swims, the greater the pressure. The pressure felt is due to the weight of the water above the person. As someone swims deeper, there is more water above the person and therefore greater pressure. The pressure a liquid exerts depends on its depth.

Liquid pressure also depends on the density of the liquid. If someone was submerged in a liquid more dense than water, the pressure would be correspondingly greater. The pressure due to a liquid in liquid columns of constant density or at a depth within a substance is represented by the following formula:

    p = ρgh

where:

• p is liquid pressure
• g is gravity at the surface of overlaying material
• ρ is density of liquid
• h is height of liquid column or depth within a substance

Another way of saying this same formula is the following:

    pressure = weight density × depth

The pressure a liquid exerts against the sides and bottom of a container depends on the density and the depth of the liquid. If atmospheric pressure is neglected, liquid pressure against the bottom is twice as great at twice the depth; at three times the depth, the liquid pressure is threefold; etc. Or, if the liquid is two or three times as dense, the liquid pressure is correspondingly two or three times as great for any given depth. Liquids are practically incompressible – that is, their volume can hardly be changed by pressure (water volume decreases by only 50 millionths of its original volume for each atmospheric increase in pressure). Thus, except for small changes produced by temperature, the density of a particular liquid is practically the same at all depths.

Atmospheric pressure pressing on the surface of a liquid must be taken into account when trying to discover the total pressure acting on a liquid. The total pressure of a liquid, then, is ρgh plus the pressure of the atmosphere. When this distinction is important, the term total pressure is used. Otherwise, discussions of liquid pressure refer to pressure without regard to the normally ever-present atmospheric pressure.
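The relation p = ρgh, and the total pressure including the atmosphere, can be sketched numerically (function names and the 10 m example depth are mine, not from the text):

```python
# Sketch of hydrostatic pressure p = rho*g*h, SI units.
RHO_WATER = 1000.0  # kg/m^3, fresh water
G = 9.80665         # m/s^2, standard gravity
P_ATM = 101325.0    # Pa, standard atmosphere

def liquid_pressure(depth_m, rho=RHO_WATER):
    """Gauge pressure due to the liquid column alone."""
    return rho * G * depth_m

def total_pressure(depth_m, rho=RHO_WATER):
    """rho*g*h plus the pressure of the atmosphere."""
    return liquid_pressure(depth_m, rho) + P_ATM

p10 = liquid_pressure(10.0)  # about 98 kPa: ~10 m of water is roughly one atmosphere
```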

It is important to recognize that the pressure does not depend on the amount of liquid present. Volume is not the important factor – depth is. The average water pressure acting against a dam depends on the average depth of the water and not on the volume of water held back. For example, a wide but shallow lake with a depth of 3 m (10 ft) exerts only half the average pressure that a small 6 m (20 ft) deep pond does. (Note that the total force applied to the longer dam will be greater, due to the greater total surface area for the pressure to act upon; but for a given 5 ft section of each dam, the 10 ft deep water will apply half the force of 20 ft deep water.) A person will feel the same pressure whether his/her head is dunked a metre beneath the surface of the water in a small pool or to the same depth in the middle of a large lake. If four vases contain different amounts of water but are all filled to equal depths, then a fish with its head dunked a few centimetres under the surface will be acted on by water pressure that is the same in any of the vases. If the fish swims a few centimetres deeper, the pressure on the fish will increase with depth and be the same no matter which vase the fish is in. If the fish swims to the bottom, the pressure will be greater, but it makes no difference what vase it is in. All vases are filled to equal depths, so the water pressure is the same at the bottom of each vase, regardless of its shape or volume. If water pressure at the bottom of a vase were greater than water pressure at the bottom of a neighboring vase, the greater pressure would force water sideways and then up the narrower vase to a higher level until the pressures at the bottom were equalized. Pressure is depth dependent, not volume dependent, so there is a reason that water seeks its own level.

Restating this as an energy equation: the energy per unit volume in an ideal, incompressible liquid is constant throughout its vessel. At the surface, gravitational potential energy is large but liquid pressure energy is low. At the bottom of the vessel, all the gravitational potential energy is converted to pressure energy. The sum of pressure energy and gravitational potential energy per unit volume is constant throughout the volume of the fluid and the two energy components change linearly with the depth.[12] Mathematically, it is described by Bernoulli’s equation, where velocity head is zero and comparisons per unit volume in the vessel are:

    p/γ + z = const

Terms have the same meaning as in section Fluid pressure.

Direction of liquid pressure

An experimentally determined fact about liquid pressure is that it is exerted equally in all directions.[13] If someone is submerged in water, no matter which way that person tilts his/her head, the person will feel the same amount of water pressure on his/her ears. Because a liquid can flow, this pressure isn’t only downward. Pressure is seen acting sideways when water spurts sideways from a leak in the side of an upright can. Pressure also acts upward, as demonstrated when someone tries to push a beach ball beneath the surface of the water. The bottom of a boat is pushed upward by water pressure (buoyancy).

When a liquid presses against a surface, there is a net force that is perpendicular to the surface. Although pressure doesn’t have a specific direction, force does. A submerged triangular block has water forced against each point from many directions, but components of the force that are not perpendicular to the surface cancel each other out, leaving only a net perpendicular force.[13] This is why water spurting from a hole in a bucket initially exits the bucket in a direction at right angles to the surface of the bucket in which the hole is located. Then it curves downward due to gravity. If there are three holes in a bucket (top, bottom, and middle), then the force vectors perpendicular to the inner container surface will increase with increasing depth – that is, a greater pressure at the bottom makes it so that the bottom hole will shoot water out the farthest. The force exerted by a fluid on a smooth surface is always at right angles to the surface. The speed of liquid out of the hole is √(2gh), where h is the depth below the free surface.[13] Interestingly, this is the same speed the water (or anything else) would have if freely falling the same vertical distance h.

Kinematic pressure

    P = p/ρ₀

is the kinematic pressure, where p is the pressure and ρ₀ a constant mass density. The SI unit of P is m²/s². Kinematic pressure is used in the same manner as kinematic viscosity ν in order to compute the Navier–Stokes equation without explicitly showing the density ρ₀:

    ∂u/∂t + (u·∇)u = −∇P + ν∇²u
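The efflux speed √(2gh) quoted for the hole in the bucket can be sketched numerically (the 1 m example depth is mine, not from the text):

```python
import math

G = 9.80665  # m/s^2, standard gravity

def efflux_speed(depth_m):
    """Torricelli's result: speed of liquid leaving a hole at depth h, m/s."""
    return math.sqrt(2 * G * depth_m)

# Water leaving a hole 1 m below the free surface moves at the same speed
# it would reach by freely falling 1 m (about 4.4 m/s).
v = efflux_speed(1.0)
```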

6.3.3 See also

• Atmospheric pressure
• Blood pressure
• Boyle’s Law
• Combined gas law
• Conversion of units
• Critical point (thermodynamics)
• Dynamic pressure
• Hydraulics
• Internal pressure
• Kinetic theory
• Microphone
• Orders of magnitude (pressure)
• Partial pressure
• Pressure measurement
• Pressure sensor
• Sound pressure
• Spouting can
• Timeline of temperature and pressure measurement technology
• Units conversion by factor-label
• Vacuum
• Vacuum pump
• Vertical pressure variation

6.3.4 Notes

[1] The preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the “gauge” spelling.

6.3.6 External links

• Introduction to Fluid Statics and Dynamics on Project PHYSNET
• Pressure being a scalar quantity

6.3.5 References

[12] Streeter, V.L. (1966). Fluid Mechanics, Example 3.5. New York: McGraw–Hill.
[13] Hewitt, P. (2006), p. 251.

[1] Giancoli, Douglas G. (2004). Physics: Principles with Applications. Upper Saddle River, N.J.: Pearson Education. ISBN 0-13-060620-0.
[2] McNaught, A. D.; Wilkinson, A.; Nic, M.; Jirat, J.; Kosata, B.; Jenkins, A. (2014). IUPAC. Compendium of Chemical Terminology, 2nd ed. (the “Gold Book”), 2.3.3. Oxford: Blackwell Scientific Publications. doi:10.1351/goldbook.P04819. ISBN 0-9678550-9-8.
[3] “14th Conference of the International Bureau of Weights and Measures”. Bipm.fr. Retrieved 2012-03-27.
[4] US Navy (2006). US Navy Diving Manual, 6th revision. United States: US Naval Sea Systems Command. pp. 2–32. Retrieved 2008-06-15.
[5] “Rules and Style Conventions for Expressing Values of Quantities”. NIST. Retrieved 2009-07-07.
[6] NIST, Rules and Style Conventions for Expressing Values of Quantities, Sect. 7.4.
[7] “Einstein’s gravity under pressure”. Springerlink.com. Retrieved 2012-03-27.
[8] Finnemore, John E.; Franzini, Joseph B. (2002). Fluid Mechanics: With Engineering Applications. New York: McGraw-Hill. pp. 14–29. ISBN 978-0-07-243202-2.
[9] NCEES (2011). Fundamentals of Engineering: Supplied Reference Handbook. Clemson, South Carolina: NCEES. p. 64. ISBN 978-1-932613-59-9.
[10] Wright, Karen (March 2003). “The Physics of Negative Pressure”. Discover. Retrieved 31 January 2015.
[11] Atkins, P.; de Paula, J. (2006). Elements of Physical Chemistry, 4th ed. W.H. Freeman. ISBN 0-7167-7329-5.

6.4 Thermodynamic temperature

Thermodynamic temperature is the absolute measure of temperature and is one of the principal parameters of thermodynamics. Thermodynamic temperature is defined by the third law of thermodynamics in which the theoretically lowest temperature is the null or zero point. At this point, absolute zero, the particle constituents of matter have minimal motion and can become no colder.[1][2] In the quantum-mechanical description, matter at absolute zero is in its ground state, which is its state of lowest energy. Thermodynamic temperature is often also called absolute temperature, for two reasons: one, proposed by Kelvin, that it does not depend on the properties of a particular material; two, that it refers to an absolute zero according to the properties of the ideal gas.

The International System of Units specifies a particular scale for thermodynamic temperature. It uses the Kelvin scale for measurement and selects the triple point of water at 273.16 K as the fundamental fixing point. Other scales have been in use historically. The Rankine scale, using the degree Fahrenheit as its unit interval, is still in use as part of the English Engineering Units in the United States in some engineering fields. ITS-90 gives a practical means of estimating the thermodynamic temperature to a very high degree of accuracy.

Roughly, the temperature of a body at rest is a measure of the mean of the energy of the translational, vibrational and rotational motions of matter’s particle constituents, such as molecules, atoms, and subatomic particles. The full variety of these kinetic motions, along with potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these, make up the total internal energy of a substance. Internal energy is loosely called the heat energy or thermal energy in conditions when no work is done upon the substance by its surroundings, or by the substance upon the surroundings. Internal energy may be stored in a number of ways within a substance, each way constituting a “degree of freedom”. At equilibrium,


each degree of freedom will have on average the same energy: kB T /2 where kB is the Boltzmann constant, unless that degree of freedom is in the quantum regime. The internal degrees of freedom (rotation, vibration, etc.) may be in the quantum regime at room temperature, but the translational degrees of freedom will be in the classical regime except at extremely low temperatures (fractions of kelvins) and it may be said that, for most situations, the thermodynamic temperature is specified by the average translational kinetic energy of the particles.

Temperatures expressed in kelvins are converted to degrees Rankine simply by multiplying by 1.8, as follows: T(°R) = 1.8 × T(K), where T(K) and T(°R) are the temperatures in kelvins and degrees Rankine respectively. Temperatures expressed in degrees Rankine are converted to kelvins by dividing by 1.8: T(K) = T(°R) / 1.8.
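The two conversions can be sketched as a pair of functions (names are mine, not from the text):

```python
# Sketch of the kelvin <-> Rankine relations stated above.
def kelvin_to_rankine(t_k):
    """T(degR) = 1.8 * T(K)."""
    return 1.8 * t_k

def rankine_to_kelvin(t_r):
    """T(K) = T(degR) / 1.8."""
    return t_r / 1.8

# The triple point of water, 273.16 K, is 491.688 degR.
tp_rankine = kelvin_to_rankine(273.16)
```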

6.4.1 Overview

Temperature is a measure of the random submicroscopic motions and vibrations of the particle constituents of matter. These motions comprise the internal energy of a substance. More specifically, the thermodynamic temperature of any bulk quantity of matter is the measure of the average kinetic energy per classical (i.e., non-quantum) degree of freedom of its constituent particles. “Translational motions” are almost always in the classical regime. Translational motions are ordinary, whole-body movements in three-dimensional space in which particles move about and exchange energy in collisions. Figure 1 below shows translational motion in gases; Figure 4 below shows translational motion in solids. Thermodynamic temperature’s null point, absolute zero, is the temperature at which the particle constituents of matter are as close as possible to complete rest; that is, they have minimal motion, retaining only quantum mechanical motion.[3] Residual kinetic energy (zero-point energy) remains in a substance even at absolute zero (see Thermal energy at absolute zero, below). Throughout the scientific world where measurements are made in SI units, thermodynamic temperature is measured in kelvins (symbol: K). Many engineering fields in the U.S., however, measure thermodynamic temperature using the Rankine scale.

By international agreement, the unit kelvin and its scale are defined by two points: absolute zero, and the triple point of Vienna Standard Mean Ocean Water (water with a specified blend of hydrogen and oxygen isotopes). Absolute zero, the lowest possible temperature, is defined as being precisely 0 K and −273.15 °C. The triple point of water is defined as being precisely 273.16 K and 0.01 °C. This definition does three things:

1. It fixes the magnitude of the kelvin unit as being precisely 1 part in 273.16 parts of the difference between absolute zero and the triple point of water;

2. It establishes that one kelvin has precisely the same magnitude as a one-degree increment on the Celsius scale; and

3. It establishes the difference between the two scales’ null points as being precisely 273.15 kelvins (0 K = −273.15 °C and 273.16 K = 0.01 °C).

Practical realization

Main article: ITS-90

Although the Kelvin and Celsius scales are defined using absolute zero (0 K) and the triple point of water (273.16 K and 0.01 °C), it is impractical to use this definition at temperatures that are very different from the triple point of water. ITS-90 is designed to represent the thermodynamic temperature as closely as possible throughout its range. Many different thermometer designs are required to cover the entire range. These include helium vapor pressure thermometers, helium gas thermometers, standard platinum resistance thermometers (known as SPRTs, PRTs or Platinum RTDs) and monochromatic radiation thermometers. For some types of thermometer the relationship between the property observed (e.g., length of a mercury column) and temperature is close to linear, so for most purposes a linear scale is sufficient, without point-by-point calibration. For others a calibration curve or equation is required. The mercury thermometer, invented before the thermodynamic temperature was understood, originally defined the temperature scale; its linearity made readings correlate well with true temperature, i.e. the “mercury” temperature scale was a close fit to the true scale.

6.4.2

The relationship of temperature, motions, conduction, and thermal energy

The nature of kinetic energy, translational motion, and temperature

The thermodynamic temperature is a measure of the average energy of the translational, vibrational, and rotational motions of matter’s particle constituents (molecules, atoms, and subatomic particles). The full variety of these kinetic motions, along with potential energies of particles, and also occasionally certain other types of particle energy in equilibrium with these, contribute to the total internal energy (loosely, the thermal energy) of a substance. Thus, internal energy may be stored in a number of ways (degrees of


freedom) within a substance. When the degrees of freedom are in the classical regime (“unfrozen”) the temperature is very simply related to the average energy of those degrees of freedom at equilibrium. The three translational degrees of freedom are unfrozen except for the very lowest temperatures, and their kinetic energy is simply related to the thermodynamic temperature over the widest range. The heat capacity, which relates heat input and temperature change, is discussed below.

The relationship of kinetic energy, mass, and velocity is given by the formula Ek = ½mv².[4] Accordingly, particles with one unit of mass moving at one unit of velocity have precisely the same kinetic energy, and precisely the same temperature, as those with four times the mass but half the velocity.

Since there are three translational degrees of freedom (e.g., motion along the x, y, and z axes), the translational kinetic energy is related to the kinetic temperature by

    Ē = (3/2) kB Tk

where:

• Ē is the mean kinetic energy in joules (J), pronounced “E bar”
• kB = 1.3806504(24)×10⁻²³ J/K is the Boltzmann constant, pronounced “Kay sub bee”
• Tk is the kinetic temperature in kelvins (K), pronounced “Tee sub kay”

Fig. 1 The translational motion of fundamental particles of nature such as atoms and molecules is directly related to temperature. Here, the size of helium atoms relative to their spacing is shown to scale under 1950 atmospheres of pressure. These room-temperature atoms have a certain average speed (slowed down here two trillion-fold). At any given instant however, a particular helium atom may be moving much faster than average while another may be nearly motionless. Five atoms are colored red to facilitate following their motions.

Except in the quantum regime at extremely low temperatures, the thermodynamic temperature of any bulk quantity of a substance (a statistically significant quantity of particles) is directly proportional to the mean average kinetic energy of a specific kind of particle motion known as translational motion. These simple movements in the three x, y, and z–axis dimensions of space means the particles move in the three spatial degrees of freedom. The temperature derived from this translational kinetic energy is sometimes referred to as kinetic temperature and is equal to the thermodynamic temperature over a very wide range of temperatures.

While the Boltzmann constant is useful for finding the mean kinetic energy of a particle, it’s important to note that even when a substance is isolated and in thermodynamic equilibrium (all parts are at a uniform temperature and no heat is going into or out of it), the translational motions of individual atoms and molecules occur across a wide range of speeds (see animation in Figure 1 above). At any one instant, the proportion of particles moving at a given speed within this range is determined by probability as described by the Maxwell–Boltzmann distribution. The graph shown here in Fig. 2 shows the speed distribution of 5500 K helium atoms. They have a most probable speed of 4.780 km/s. However, a certain proportion of atoms at any given instant are moving faster while others are moving relatively slowly; some are momentarily at a virtual standstill (off the x–axis to the right). This graph uses inverse speed for its x–axis so the shape of the curve can easily be compared to the curves in Figure 5 below. In both graphs, zero on the x–axis represents infinite temperature. Additionally, the x and y–axis on both graphs are scaled proportionally.

Fig. 2 The translational motions of helium atoms occur across a range of speeds. Compare the shape of this curve to that of a Planck curve in Fig. 5 below.

The high speeds of translational motion

Although very specialized laboratory equipment is required to directly detect translational motions, the resultant collisions by atoms or molecules with small particles suspended in a fluid produce Brownian motion that can be seen with an ordinary microscope. The translational motions of elementary particles are very fast[5] and temperatures close to absolute zero are required to directly observe them. For instance, when scientists at the NIST achieved a record-setting cold temperature of 700 nK (billionths of a kelvin) in 1994, they used optical lattice laser equipment to adiabatically cool caesium atoms. They then turned off the entrapment lasers and directly measured atom velocities of 7 mm per second in order to calculate their temperature.[6] Formulas for calculating the velocity and speed of translational motion are given in the following footnote.[7]
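The 4.780 km/s figure for 5500 K helium can be reproduced from the Maxwell–Boltzmann most probable speed, v_p = √(2kB·T/m), together with the mean translational kinetic energy (3/2)kB·T. A sketch (the CODATA constant values are mine, not quoted from the text):

```python
import math

K_B = 1.380649e-23                     # Boltzmann constant, J/K
M_HE = 4.002602 * 1.66053906660e-27    # mass of a helium-4 atom, kg

def most_probable_speed(t_kelvin):
    """Maxwell-Boltzmann most probable speed of a helium atom, m/s."""
    return math.sqrt(2 * K_B * t_kelvin / M_HE)

def mean_translational_ke(t_kelvin):
    """Mean translational kinetic energy per particle, (3/2) kB T, in joules."""
    return 1.5 * K_B * t_kelvin

v_p = most_probable_speed(5500)  # about 4.78 km/s, matching Fig. 2
```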

Fig. 3 Because of their internal structure and flexibility, molecules can store kinetic energy in internal degrees of freedom which contribute to the heat capacity.

The internal motions of molecules and specific heat

There are other forms of internal energy besides the kinetic energy of translational motion. As can be seen in the animation at right, molecules are complex objects; they are a population of atoms and thermal agitation can strain their internal chemical bonds in three different ways: via rotation, bond length, and bond angle movements. These are all types of internal degrees of freedom. This makes molecules distinct from monatomic substances (consisting of individual atoms) like the noble gases helium and argon, which have only the three translational degrees of freedom. Kinetic energy is stored in molecules’ internal degrees of freedom, which gives them an internal temperature. Even though these motions are called internal, the external portions of molecules still move—rather like the jiggling of a stationary water balloon. This permits the two-way exchange of kinetic energy between internal motions and translational motions with each molecular collision. Accordingly, as energy is removed from molecules, both their kinetic temperature (the temperature derived from the kinetic energy of translational motion) and their internal temperature simultaneously diminish in equal proportions. This phenomenon is described by the equipartition theorem, which states that for any bulk quantity of a substance in equilibrium, the kinetic energy of particle motion is evenly distributed among all the active (i.e. unfrozen) degrees of freedom available to the particles. Since the internal temperature of molecules is usually equal to their kinetic temperature, the distinction is usually of interest only in the detailed study of non-local thermodynamic equilibrium (non-LTE) phenomena such as combustion, the sublimation of solids, and the diffusion of hot gases in a partial vacuum.

The kinetic energy stored internally in molecules causes substances to contain more internal energy at any given temperature and to absorb additional internal energy for a given temperature increase. This is because any kinetic energy that is, at a given instant, bound in internal motions is not at that same instant contributing to the molecules’ translational motions.[8] This extra thermal energy simply increases the amount of energy a substance absorbs for a given temperature rise. This property is known as a substance’s specific heat capacity.

Different molecules absorb different amounts of thermal energy for each incremental increase in temperature; that is, they have different specific heat capacities. High specific heat capacity arises, in part, because certain substances’ molecules possess more internal degrees of freedom than others do. For instance, nitrogen, which is a diatomic molecule, has five active degrees of freedom at room temperature: the three comprising translational motion plus two rotational degrees of freedom internally. Since the two internal degrees of freedom are essentially unfrozen, in accordance with the equipartition theorem, nitrogen has five-thirds the specific heat capacity per mole (a specific number of molecules) as do the monatomic gases.[9] Another example is gasoline (see table showing its specific heat capacity). Gasoline can absorb a large amount of thermal energy per mole with only a modest temperature change because each molecule comprises an average of 21 atoms and therefore has many internal degrees of freedom. Even larger, more complex molecules can have dozens of internal degrees of freedom.
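The five-thirds ratio quoted for nitrogen versus the monatomic gases follows directly from equipartition, which assigns each active degree of freedom R/2 per mole, so Cv = (f/2)R. A sketch (function name is mine, not from the text):

```python
# Equipartition estimate of molar heat capacity at constant volume.
R = 8.314462618  # molar gas constant, J/(mol*K)

def molar_cv(active_dof):
    """Cv = (f/2) * R for f active degrees of freedom."""
    return active_dof / 2 * R

cv_diatomic = molar_cv(5)    # e.g. N2 at room temperature: 3 translational + 2 rotational
cv_monatomic = molar_cv(3)   # e.g. He, Ar: translational only
ratio = cv_diatomic / cv_monatomic  # the five-thirds ratio stated above
```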

The diffusion of thermal energy: Entropy, phonons, and mobile conduction electrons

Heat conduction is the diffusion of thermal energy from hot parts of a system to cold. A system can be either a single bulk entity or a plurality of discrete bulk entities. The term bulk in this context means a statistically significant quantity of particles (which can be a microscopic amount). Whenever thermal energy diffuses within an isolated system, temperature differences within the system decrease (and entropy increases).

One particular heat conduction mechanism occurs when translational motion, the particle motion underlying temperature, transfers momentum from particle to particle in collisions. In gases, these translational motions are of the nature shown above in Fig. 1. As can be seen in that animation, not only does momentum (heat) diffuse throughout the volume of the gas through serial collisions, but entire molecules or atoms can move forward into new territory, bringing their kinetic energy with them. Consequently, temperature differences equalize throughout gases very quickly—especially for light atoms or molecules; convection speeds this process even more.[10]

Translational motion in solids, however, takes the form of phonons (see Fig. 4 at right). Phonons are constrained, quantized wave packets that travel at a given substance’s speed of sound. The manner in which phonons interact within a solid determines a variety of its properties, including its thermal conductivity. In electrically insulating solids, phonon-based heat conduction is usually inefficient[11] and such solids are considered thermal insulators (such as glass, plastic, rubber, ceramic, and rock). This is because in solids, atoms and molecules are locked into place relative to their neighbors and are not free to roam.

Fig. 4 The temperature-induced translational motion of particles in solids takes the form of phonons. Shown here are phonons with identical amplitudes but with wavelengths ranging from 2 to 12 molecules.

Metals, however, are not restricted to only phonon-based heat conduction. Thermal energy conducts through metals extraordinarily quickly because instead of direct molecule-to-molecule collisions, the vast majority of thermal energy is mediated via very light, mobile conduction electrons. This is why there is a near-perfect correlation between metals’ thermal conductivity and their electrical conductivity.[12] Conduction electrons imbue metals with their extraordinary conductivity because they are delocalized (i.e., not tied to a specific atom) and behave rather like a sort of quantum gas due to the effects of zero-point energy (for more on ZPE, see Note 1 below). Furthermore, electrons are relatively light with a rest mass only 1/1836 that of a proton. This is about the same ratio as a .22 Short bullet (29 grains or 1.88 g) compared to the rifle that shoots it. As Isaac Newton wrote with his third law of motion,

Law #3: All forces occur in pairs, and these two forces are equal in magnitude and opposite in direction.

However, a bullet accelerates faster than a rifle given an equal force. Since kinetic energy increases as the square of velocity, nearly all the kinetic energy goes into the bullet, not the rifle, even though both experience the same force from the expanding propellant gases. In the same manner, because they are much less massive, thermal energy is readily borne by mobile conduction electrons. Additionally, because they’re delocalized and very fast, kinetic thermal energy conducts extremely quickly through metals with abundant conduction electrons.

The diffusion of thermal energy: Black-body radiation

Thermal radiation is a byproduct of the collisions arising from various vibrational motions of atoms. These collisions

Table of thermodynamic temperatures

The full range of the thermodynamic temperature scale, from absolute zero to absolute hot, and some notable points between them are shown in the table below.

Table footnotes:

A The 2500 K value is approximate.
B For a true blackbody (which tungsten filaments are not). Tungsten filaments’ emissivity is greater at shorter wavelengths, which makes them appear whiter.
C Effective photosphere temperature.
D For a true blackbody (which the plasma was not). The Z machine’s dominant emission originated from 40 MK electrons (soft x-ray emissions) within the plasma.

Fig. 5 The spectrum of black-body radiation has the form of a Planck curve (spectral energy density in kJ/m³·nm versus wavelength in nm, shown for black-bodies from 3500 K to 5500 K). A 5500 K black-body has a peak emittance wavelength of 527 nm. Compare the shape of this curve to that of a Maxwell distribution in Fig. 2 above.

cause the electrons of the atoms to emit thermal photons (known as black-body radiation). Photons are emitted anytime an electric charge is accelerated (as happens when electron clouds of two atoms collide). Even individual molecules with internal temperatures greater than absolute zero also emit black-body radiation from their atoms. In any bulk quantity of a substance at equilibrium, black-body photons are emitted across a range of wavelengths in a spectrum that has a bell curve-like shape called a Planck curve (see graph in Fig. 5 at right). The top of a Planck curve (the peak emittance wavelength) is located in a particular part of the electromagnetic spectrum depending on the temperature of the black-body. Substances at extreme cryogenic temperatures emit at long radio wavelengths whereas extremely hot temperatures produce short gamma rays (see Table of common temperatures).
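The temperature dependence of the peak emittance wavelength is Wien's displacement law, λ_max = b/T; the 527 nm peak of the 5500 K curve in Fig. 5 can be checked with it. A sketch (the CODATA value of b is mine, not from the text):

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def peak_wavelength_nm(t_kelvin):
    """Peak emittance wavelength of a black-body, in nanometres."""
    return WIEN_B / t_kelvin * 1e9

lam = peak_wavelength_nm(5500)  # about 527 nm, as in Fig. 5
```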

Fig. 6 Ice and water: two phases of the same substance

Black-body radiation diffuses thermal energy throughout a substance as the photons are absorbed by neighboring atoms, transferring momentum in the process. Black-body photons also easily escape from a substance and can be absorbed by the ambient environment; kinetic energy is lost in the process.

The heat of phase changes

The kinetic energy of particle motion is just one contributor to the total thermal energy in a substance; another is the potential energy of the molecular bonds that can form in a substance as it cools (such as during condensing and freezing). The thermal energy required for a phase transition is called latent heat. This phenomenon may more easily be grasped by considering it in the reverse direction: latent heat is the energy required to break chemical bonds (such as during evaporation and melting). Almost everyone is familiar with the effects of phase transitions; for instance, steam at 100 °C can cause severe burns much faster than the 100 °C air from a hair dryer. This occurs because a large amount of latent heat is liberated as steam condenses into liquid water on the skin.
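The steam-burn comparison can be made quantitative. The sketch below uses handbook values that are assumptions rather than figures from the text above: water's enthalpy of vaporization of about 2257 J/g at 100 °C (consistent with the molar value in note [29]), water's specific heat of 4.18 J/(g·K), and air's specific heat of about 1.0 J/(g·K):

```python
# Heat delivered to skin, per gram, by condensing steam vs. by hot air.
# Handbook values (assumptions, not taken from the surrounding text):
L_vap = 2257.0   # J/g, enthalpy of vaporization of water at 100 °C
c_water = 4.18   # J/(g·K), specific heat of liquid water
c_air = 1.005    # J/(g·K), specific heat of air at constant pressure

skin_T, hot_T = 37.0, 100.0
# Steam first condenses (releasing latent heat), then the condensate cools.
heat_from_steam = L_vap + c_water * (hot_T - skin_T)
# Hot air from a hair dryer merely cools to skin temperature.
heat_from_air = c_air * (hot_T - skin_T)

print(heat_from_steam / heat_from_air)  # roughly 40x more energy per gram
```

The factor of roughly forty per gram is why condensing steam burns so much faster than hot air at the same temperature.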

As established by the Stefan–Boltzmann law, the intensity of black-body radiation increases as the fourth power of absolute temperature. Thus, a black-body at 824 K (just short of glowing dull red) emits 60 times the radiant power as it does at 296 K (room temperature). This is why one can so easily feel the radiant heat from hot objects at a distance. At higher temperatures, such as those found in an incandescent lamp, black-body radiation can be the principal mechanism by which thermal energy escapes a system.
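The fourth-power dependence stated above is easy to verify; a minimal sketch:

```python
# Stefan–Boltzmann law: radiant power scales as the fourth power of
# absolute temperature, so the ratio of powers is (T_hot/T_cold)**4.
def radiant_power_ratio(T_hot, T_cold):
    """How many times more power a black body radiates at T_hot than at T_cold."""
    return (T_hot / T_cold) ** 4

# 824 K (just short of glowing dull red) vs. 296 K (room temperature):
print(round(radiant_power_ratio(824, 296)))  # → 60
```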

Even though thermal energy is liberated or absorbed during phase transitions, pure chemical elements, compounds, and eutectic alloys exhibit no temperature change whatsoever while they undergo them (see Fig. 7, below right). Consider one particular type of phase transition: melting. When a solid is melting, crystal lattice chemical bonds are being broken apart; the substance is transitioning from what is known as a more ordered state to a less ordered state. In Fig. 7, the melting of ice is shown within the lower left box

heading from blue to green.

Fig. 7 Water's temperature does not change during phase transitions as heat flows into or out of it. The total heat required to take one mole of water through its liquid phase (the green line) is 7.5507 kJ.

At one specific thermodynamic point, the melting point (which is 0 °C across a wide pressure range in the case of water), all the atoms or molecules are, on average, at the maximum energy threshold their chemical bonds can withstand without breaking away from the lattice. Chemical bonds are all-or-nothing forces: they either hold fast, or break; there is no in-between state. Consequently, when a substance is at its melting point, every joule of added thermal energy only breaks the bonds of a specific quantity of its atoms or molecules,[25] converting them into a liquid of precisely the same temperature; no kinetic energy is added to translational motion (which is what gives substances their temperature). The effect is rather like popcorn: at a certain temperature, additional thermal energy can't make the kernels any hotter until the transition (popping) is complete. If the process is reversed (as in the freezing of a liquid), thermal energy must be removed from a substance.

As stated above, the thermal energy required for a phase transition is called latent heat. In the specific cases of melting and freezing, it's called enthalpy of fusion or heat of fusion. If the molecular bonds in a crystal lattice are strong, the heat of fusion can be relatively great, typically in the range of 6 to 30 kJ per mole for water and most of the metallic elements.[26] If the substance is one of the monatomic gases (which have little tendency to form molecular bonds), the heat of fusion is more modest, ranging from 0.021 to 2.3 kJ per mole.[27]

Relatively speaking, phase transitions can be truly energetic events. To completely melt ice at 0 °C into water at 0 °C, one must add roughly 80 times the thermal energy as is required to increase the temperature of the same mass of liquid water by one degree Celsius. The metals' ratios are even greater, typically in the range of 400 to 1200 times.[28] And the phase transition of boiling is much more energetic than freezing. For instance, the energy required to completely boil or vaporize water (what is known as enthalpy of vaporization) is roughly 540 times that required for a one-degree increase.[29]

Water's sizable enthalpy of vaporization is why one's skin can be burned so quickly as steam condenses on it (heading from red to green in Fig. 7 above). In the opposite direction, this is why one's skin feels cool as liquid water on it evaporates (a process that occurs at a sub-ambient wet-bulb temperature that is dependent on relative humidity). Water's highly energetic enthalpy of vaporization is also an important factor underlying why solar pool covers (floating, insulated blankets that cover swimming pools when not in use) are so effective at reducing heating costs: they prevent evaporation. For instance, the evaporation of just 20 mm of water from a 1.29-meter-deep pool chills its water 8.4 degrees Celsius (15.1 °F).

Internal energy

The total energy of all particle motion, translational and internal, including that of conduction electrons, plus the potential energy of phase changes, plus zero-point energy,[3] comprise the internal energy of a substance.
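The 80× and 540× ratios quoted for water can be recovered directly from the molar values cited in notes [26] and [29]; a minimal sketch:

```python
# Water's phase-change energies compared with a one-degree temperature rise,
# using the molar values cited in the notes.
H_fus = 6.0095        # kJ/mol, enthalpy of fusion (0 °C)
H_vap = 40.657        # kJ/mol, enthalpy of vaporization (100 °C)
Cp_liquid = 0.075327  # kJ/(mol·K), specific heat of liquid water (25 °C)

print(round(H_fus / Cp_liquid))  # → 80  (melting vs. warming by 1 °C)
print(round(H_vap / Cp_liquid))  # → 540 (boiling vs. warming by 1 °C)
```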

Fig. 8 When many of the chemical elements, such as the noble gases and platinum-group metals, freeze to a solid — the most ordered state of matter — their crystal structures have a closest-packed arrangement. This yields the greatest possible packing density and the lowest energy state.

Internal energy at absolute zero

As a substance cools, different forms of internal energy and their related effects simultaneously decrease in magnitude: the latent heat of available phase transitions is liberated as a substance changes from a less ordered state to a more ordered state; the translational motions of atoms and molecules diminish (their kinetic temperature decreases); the internal motions of molecules diminish (their internal temperature decreases); conduction electrons (if the substance is an electrical conductor) travel somewhat slower;[30] and black-body radiation's peak emittance wavelength increases (the photons' energy decreases). When the particles of a substance are as close as possible to complete rest and retain only ZPE-induced quantum mechanical motion, the substance is at the temperature of absolute zero (T=0).

Note that whereas absolute zero is the point of zero thermodynamic temperature and is also the point at which the particle constituents of matter have minimal motion, absolute zero is not necessarily the point at which a substance contains zero thermal energy; one must be very precise with what one means by internal energy. Often, all the phase changes that can occur in a substance will have occurred by the time it reaches absolute zero. However, this is not always the case. Notably, T=0 helium remains liquid at room pressure and must be under a pressure of at least 25 bar (2.5 MPa) to crystallize. This is because helium's heat of fusion (the energy required to melt helium ice) is so low (only 21 joules per mole) that the motion-inducing effect of zero-point energy is sufficient to prevent it from freezing at lower pressures. Only if under at least 25 bar (2.5 MPa) of pressure will this latent thermal energy be liberated as helium freezes while approaching absolute zero. A further complication is that many solids change their crystal structure to more compact arrangements at extremely high pressures (up to millions of bars, or hundreds of gigapascals). These are known as solid-solid phase transitions, wherein latent heat is liberated as a crystal lattice changes to a more thermodynamically favorable, compact one. The above complexities make for rather cumbersome blanket statements regarding the internal energy of T=0 substances.
Regardless of pressure, though, what can be said is that at absolute zero, all solids with a lowest-energy crystal lattice, such as those with a closest-packed arrangement (see Fig. 8, above left), contain minimal internal energy, retaining only that due to the ever-present background of zero-point energy.[3][31] One can also say that for a given substance at constant pressure, absolute zero is the point of lowest enthalpy (a measure of work potential that takes internal energy, pressure, and volume into consideration).[32] Lastly, it is always true to say that all T=0 substances contain zero kinetic thermal energy.[3][7]

6.4.3 Practical applications for thermodynamic temperature

Helium-4 is a superfluid at or below 2.17 kelvins (2.17 Celsius degrees above absolute zero).

Thermodynamic temperature is useful not only to scientists; it can also be useful to lay people in many disciplines involving gases. By expressing variables in absolute terms and applying Gay–Lussac's law of temperature/pressure proportionality, solutions to everyday problems are straightforward; for instance, calculating how a temperature change affects the pressure inside an automobile tire. If the tire has a relatively cold pressure of 200 kPa-gage, then in absolute terms (relative to a vacuum), its pressure is 300 kPa-absolute.[33][34][35] Room temperature ("cold" in tire terms) is 296 K. If the tire is 20 °C hotter (20 kelvins), the solution is calculated as 316 K / 296 K = 1.068, i.e. 6.8% greater thermodynamic temperature and absolute pressure; that is, an absolute pressure of 320 kPa, which is 220 kPa-gage.
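The tire calculation above can be sketched in a few lines of Python (the 100 kPa atmospheric offset follows note [35]):

```python
# Gay-Lussac's law: at constant volume, absolute pressure scales with
# absolute temperature. Gage pressure must first be converted to absolute.
P_ATM = 100.0  # kPa, standard atmospheric pressure (see note [35])

def hot_tire_pressure_gage(cold_gage_kpa, cold_T_k, hot_T_k):
    """Gage pressure (kPa) after a temperature change, at constant volume."""
    cold_abs = cold_gage_kpa + P_ATM            # convert to absolute pressure
    hot_abs = cold_abs * (hot_T_k / cold_T_k)   # Gay-Lussac's law
    return hot_abs - P_ATM                      # convert back to gage

print(round(hot_tire_pressure_gage(200.0, 296.0, 316.0)))  # → 220
```

Note [33] below explains why working in gage pressure throughout, without the conversion to absolute, gives a significantly wrong answer at automobile-tire pressures.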

6.4.4 Definition of thermodynamic temperature

The thermodynamic temperature is defined by the second law of thermodynamics and its consequences. The thermodynamic temperature can be shown to have special properties; in particular, it can be seen to be uniquely defined (up to some constant multiplicative factor) by considering the efficiency of idealized heat engines. Thus the ratio T2/T1 of two temperatures T1 and T2 is the same in all absolute scales. Strictly speaking, the temperature of a system is well-defined only if it is at thermal equilibrium. From a microscopic viewpoint, a material is at thermal equilibrium if the heat flows between its individual particles cancel out. There are many possible scales of temperature, derived from a variety of observations of physical phenomena.

Loosely stated, temperature differences dictate the direction of heat flow between two systems such that their combined energy is maximally distributed among their lowest possible states. We call this distribution "entropy". To better understand the relationship between temperature and entropy,


consider the relationship between heat, work and temperature illustrated in the Carnot heat engine. The engine converts heat into work by directing a temperature gradient between a higher-temperature heat source, TH, and a lower-temperature heat sink, TC, through a gas-filled piston. The work done per cycle is equal to the difference between the heat supplied to the engine by TH, qH, and the heat supplied to TC by the engine, qC. The efficiency of the engine is the work divided by the heat put into the system, or

Efficiency = wcy/qH = (qH − qC)/qH = 1 − qC/qH    (1)

where wcy is the work done per cycle. Thus the efficiency depends only on qC/qH.

Carnot's theorem states that all reversible engines operating between the same heat reservoirs are equally efficient. Thus, any reversible heat engine operating between temperatures T1 and T2 must have the same efficiency; that is to say, the efficiency is a function of the temperatures only:

qC/qH = f(TH, TC)    (2)

In addition, a reversible heat engine operating between temperatures T1 and T3 must have the same efficiency as one consisting of two cycles, one between T1 and another (intermediate) temperature T2, and the second between T2 and T3. If this were not the case, then energy (in the form of Q) will be wasted or gained, resulting in different overall efficiencies every time a cycle is split into component cycles; clearly a cycle can be composed of any number of smaller cycles. With this understanding of q1, q2 and q3, we note that mathematically

q3/q1 = f(T1, T3) = f(T1, T2) · f(T2, T3).

Since the left-hand side is independent of the intermediate temperature T2, T2 must cancel from the right-hand side, meaning f must be of the form

f(T1, T3) = g(T3)/g(T1),

so that

q3/q1 = g(T3)/g(T1);

i.e., the ratio of heat exchanged is a function of the respective temperatures at which it occurs. We can choose any monotonic function for our g(T); it is a matter of convenience and convention that we choose g(T) = T. Choosing then one fixed reference temperature (the triple point of water), we establish the thermodynamic temperature scale.

It is to be noted that such a definition coincides with that of the ideal gas derivation; also, it is this definition of the thermodynamic temperature that enables us to represent the Carnot efficiency in terms of TH and TC, and hence derive that the (complete) Carnot cycle is isentropic:

qC/qH = f(TH, TC) = TC/TH.    (3)

Substituting this back into our first formula for efficiency yields a relationship in terms of temperature:

Efficiency = 1 − qC/qH = 1 − TC/TH    (4)
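Relation (4) lends itself to a quick numeric check; a minimal sketch (the reservoir temperatures are illustrative choices, not values from the text):

```python
# Carnot efficiency: the fraction of input heat that a reversible engine
# operating between two reservoirs can convert to work (equation 4).
def carnot_efficiency(T_hot_k, T_cold_k):
    """Efficiency of a reversible engine between two absolute temperatures."""
    return 1.0 - T_cold_k / T_hot_k

# An engine between boiling water (373.15 K) and room temperature (296.15 K)
# can convert at most about a fifth of the input heat into work:
print(round(carnot_efficiency(373.15, 296.15), 3))  # → 0.206
```

Because only the ratio TC/TH appears, the result is the same in any absolute temperature scale, which is the uniqueness property noted in the definition above.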

Notice that for TC = 0 the efficiency is 100%, and that the efficiency would become greater than 100% for TC < 0 K, which is unrealistic.

[3] In gases at T > 0 K, ZPE has little practical effect. However, in T=0 condensed matter, e.g., solids and liquids, ZPE causes inter-atomic jostling where atoms would otherwise be perfectly stationary. Inasmuch as the real-world effects that ZPE has on substances can vary as one alters a thermodynamic system (for example, due to ZPE, helium won't freeze unless under a pressure of at least 25 bar or 2.5 MPa), ZPE is very much a form of thermal energy and may properly be included when tallying a substance's internal energy. Note too that absolute zero serves as the baseline atop which thermodynamics and its equations are founded because they deal with the exchange of thermal energy between "systems" (a plurality of particles and fields modeled as an average). Accordingly, one may examine ZPE-induced particle motion within a system that is at absolute zero, but there can never be a net outflow of thermal energy from such a system. Also, the peak emittance wavelength of black-body radiation shifts to infinity at absolute zero; indeed, a peak no longer exists and black-body photons can no longer escape. Because of ZPE, however, virtual photons are still emitted at T=0. Such photons are called "virtual" because they can't be intercepted and observed. Furthermore, this zero-point radiation has a unique zero-point spectrum. However, even though a T=0 system emits zero-point radiation, no net heat flow Q out of such a system can occur, because if the surrounding environment is at a temperature greater than T=0, heat will flow inward, and if the surrounding environment is at T=0, there will be an equal flux of ZP radiation both inward and outward. A similar Q equilibrium exists at T=0 with the ZPE-induced spontaneous emission of photons (which is more properly called a stimulated emission in this context). The graph at upper right illustrates the relationship of absolute zero to zero-point energy. The graph also helps

in the understanding of how zero-point energy got its name: it is the vibrational energy matter retains at the zero-kelvin point. Citation: Derivation of the classical electromagnetic zero-point radiation spectrum via a classical thermodynamic operation involving van der Waals forces, Daniel C. Cole, Physical Review A, 42 (1990) 1847.

[4] At non-relativistic temperatures of less than about 30 GK, classical mechanics is sufficient to calculate the velocity of particles. At 30 GK, individual neutrons (the constituent of neutron stars and one of the few materials in the universe with temperatures in this range) have a Lorentz factor γ of 1.0042. Thus, the classic Newtonian formula for kinetic energy is in error by less than half a percent for temperatures less than 30 GK.

[5] Even room-temperature air has an average molecular translational speed (not vector-isolated velocity) of 1822 km/hour. This is relatively fast for something the size of a molecule considering there are roughly 2.42×10¹⁶ of them crowded into a single cubic millimeter. Assumptions: average molecular weight of wet air = 28.838 g/mol and T = 296.15 K. Assumption's primary variables: an altitude of 194 meters above mean sea level (the world-wide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mmHg (101.325 kPa) sea level-corrected barometric pressure.

[6] Adiabatic Cooling of Cesium to 700 nK in an Optical Lattice, A. Kastberg et al., Physical Review Letters 74 (1995) 1542, doi:10.1103/PhysRevLett.74.1542. It's noteworthy that a record cold temperature of 450 pK in a Bose–Einstein condensate of sodium atoms (achieved by A. E. Leanhardt et al. of MIT) equates to an average vector-isolated atom velocity of 0.4 mm/s and an average atom speed of 0.7 mm/s.
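The 1822 km/h figure in note [5] can be reproduced from the kinetic-theory relation s = √(3·kB·T/m), written here with molar quantities; the molar mass of wet air is the value assumed in that note:

```python
import math

# Mean translational speed of air molecules from kinetic theory:
# s = sqrt(3*kB*T/m), or equivalently sqrt(3*R*T/M) with molar quantities.
R = 8.314462618   # J/(mol·K), molar gas constant
M_air = 0.028838  # kg/mol, molar mass of wet air assumed in note [5]
T = 296.15        # K, room temperature

speed_ms = math.sqrt(3 * R * T / M_air)  # metres per second
print(round(speed_ms * 3.6))  # km/h → 1822
```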
[7] The rate of translational motion of atoms and molecules is calculated based on thermodynamic temperature as follows:

v̄ = √(kB·T / m)

where:

• v̄ is the vector-isolated mean velocity of translational particle motion in m/s

• kB is the Boltzmann constant = 1.3806504(24)×10⁻²³ J/K

• T is the thermodynamic temperature in kelvins

• m is the molecular mass of the substance in kilograms

In the above formula, molecular mass, m, in kilograms per particle is the quotient of a substance's molar mass (also known as atomic weight, atomic mass, relative atomic mass, and unified atomic mass units) in g/mol or daltons divided by 6.02214179(30)×10²⁶ (which is the Avogadro constant times one thousand). For diatomic molecules such as H₂, N₂, and O₂, multiply atomic weight by two before plugging it into the above formula. The mean speed (not vector-isolated


velocity) of an atom or molecule along any arbitrary path is calculated as follows:

s̄ = v̄·√3

where:

• s̄ is the mean speed of translational particle motion in m/s

Note that the mean energy of the translational motions of a substance's constituent particles correlates to their mean speed, not velocity. Thus, substituting s̄ for v in the classic formula for kinetic energy, Ek = ½·m·v², produces precisely the same value as does Emean = (3/2)·kB·T (as shown in the section titled The nature of kinetic energy, translational motion, and temperature). Note too that the Boltzmann constant and its related formulas establish that absolute zero is the point of both zero kinetic energy of particle motion and zero kinetic velocity (see also Note 1 above).

[8] The internal degrees of freedom of molecules cause their external surfaces to vibrate and can also produce overall spinning motions (which can be likened to the jiggling and spinning of an otherwise stationary water balloon). If one examines a single molecule as it impacts a container's wall, some of the kinetic energy borne in the molecule's internal degrees of freedom can constructively add to its translational motion during the instant of the collision, and extra kinetic energy will be transferred into the container's wall. This would induce an extra, localized, impulse-like contribution to the average pressure on the container. However, since the internal motions of molecules are random, they have an equal probability of destructively interfering with translational motion during a collision with a container's walls or another molecule. Averaged across any bulk quantity of a gas, the internal thermal motions of molecules have zero net effect upon the temperature, pressure, or volume of a gas. Molecules' internal degrees of freedom simply provide additional locations where internal energy is stored.
This is precisely why molecular-based gases have greater specific heat capacity than monatomic gases (where additional thermal energy must be added to achieve a given temperature rise).

[9] When measured at constant volume, since different amounts of work must be performed if measured at constant pressure. Nitrogen's Cv (100 kPa, 20 °C) equals 20.8 J mol⁻¹ K⁻¹ vs. the monatomic gases, which equal 12.4717 J mol⁻¹ K⁻¹. Citations: W.H. Freeman's Physical Chemistry, Part 3: Change (422 kB PDF, here), Exercise 21.20b, p. 787. Also Georgia State University's Molar Specific Heats of Gases.

[10] The speed at which thermal energy equalizes throughout the volume of a gas is very rapid. However, since gases have extremely low density relative to solids, the heat flux (the thermal power passing per unit area) through gases is comparatively low. This is why the dead-air spaces in multi-pane windows have insulating qualities.

[11] Diamond is a notable exception. Highly quantized modes of phonon vibration occur in its rigid crystal lattice. Therefore,


not only does diamond have exceptionally poor specific heat capacity, it also has exceptionally high thermal conductivity.

[12] Correlation is 752 (W m⁻¹ K⁻¹)/(MS·cm), σ = 81, through a 7:1 range in conductivity. Value and standard deviation based on data for Ag, Cu, Au, Al, Ca, Be, Mg, Rh, Ir, Zn, Co, Ni, Os, Fe, Pa, Pt, and Sn. Citation: data from CRC Handbook of Chemistry and Physics, 1st Student Edition, and Web Elements' home page.

[13] The cited emission wavelengths are for true black bodies in equilibrium. In this table, only the Sun so qualifies. CODATA 2006 recommended value of 2.897 7685(51)×10⁻³ m·K used for the Wien displacement law constant b.

[14] A record cold temperature of 450 ±80 pK in a Bose–Einstein condensate (BEC) of sodium atoms was achieved in 2003 by researchers at MIT. Citation: Cooling Bose–Einstein Condensates Below 500 Picokelvin, A. E. Leanhardt et al., Science 301, 12 Sept. 2003, p. 1515. It's noteworthy that this record's peak emittance black-body wavelength of 6,400 kilometers is roughly the radius of Earth.

[15] The peak emittance wavelength of 2.897 77 m corresponds to a frequency of 103.456 MHz.

[16] Measurement was made in 2002 and has an uncertainty of ±3 kelvins. A 1989 measurement produced a value of 5777 ±2.5 K. Citation: Overview of the Sun (Chapter 1 lecture notes on Solar Physics by Division of Theoretical Physics, Dept. of Physical Sciences, University of Helsinki). Download paper (252 kB PDF)

[17] The 350 MK value is the maximum peak fusion fuel temperature in a thermonuclear weapon of the Teller–Ulam configuration (commonly known as a “hydrogen bomb”). Peak temperatures in Gadget-style fission bomb cores (commonly known as an “atomic bomb”) are in the range of 50 to 100 MK. Citation: Nuclear Weapons Frequently Asked Questions, 3.2.5 Matter At High Temperatures. Link to relevant Web page. All referenced data was compiled from publicly available sources.
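The wavelength-to-frequency conversion in note [15] follows directly from c = λ·ν; a minimal sketch:

```python
# Converting a peak emittance wavelength to a frequency via c = lambda * nu.
C = 299_792_458.0  # m/s, speed of light in vacuum

wavelength_m = 2.89777  # m, peak emittance wavelength cited in note [15]
freq_mhz = C / wavelength_m / 1e6
print(round(freq_mhz, 3))  # → 103.456 (MHz)
```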
[18] Peak temperature for a bulk quantity of matter was achieved by a pulsed-power machine used in fusion physics experiments. The term “bulk quantity” draws a distinction from collisions in particle accelerators, wherein high “temperature” applies only to the debris from two subatomic particles or nuclei at any given instant. The >2 GK temperature was achieved over a period of about ten nanoseconds during “shot Z1137.” In fact, the iron and manganese ions in the plasma averaged 3.58 ±0.41 GK (309 ±35 keV) for 3 ns (ns 112 through 115). Citation: Ion Viscous Heating in a Magnetohydrodynamically Unstable Z Pinch at Over 2×10⁹ Kelvin, M. G. Haines et al., Physical Review Letters 96, Issue 7, id. 075003. Link to Sandia's news release.

[19] Core temperature of a high-mass (>8–11 solar masses) star after it leaves the main sequence on the Hertzsprung–Russell diagram and begins the alpha process (which lasts one day) of fusing silicon-28 into heavier elements in the following



steps: sulfur-32 → argon-36 → calcium-40 → titanium-44 → chromium-48 → iron-52 → nickel-56. Within minutes of finishing the sequence, the star explodes as a Type II supernova. Citation: Stellar Evolution: The Life and Death of Our Luminous Neighbors (by Arthur Holland and Mark Williams of the University of Michigan). Link to Web site. More informative links can be found here and here, and a concise treatise on stars by NASA is here. Archived July 20, 2015, at the Wayback Machine.

[20] Based on a computer model that predicted a peak internal temperature of 30 MeV (350 GK) during the merger of a binary neutron star system (which produces a gamma-ray burst). The neutron stars in the model were 1.2 and 1.6 solar masses respectively, were roughly 20 km in diameter, and were orbiting around their barycenter (common center of mass) at about 390 Hz during the last several milliseconds before they completely merged. The 350 GK portion was a small volume located at the pair's developing common core and varied from roughly 1 to 7 km across over a time span of around 5 ms. Imagine two city-sized objects of unimaginable density orbiting each other at the same frequency as the G4 musical note (the 28th white key on a piano). It's also noteworthy that at 350 GK, the average neutron has a vibrational speed of 30% the speed of light and a relativistic mass (m) 5% greater than its rest mass (m₀). Citation: Torus Formation in Neutron Star Mergers and Well-Localized Short Gamma-Ray Bursts, R. Oechslin et al. of Max Planck Institute for Astrophysics, arXiv:astro-ph/0507099 v2, 22 Feb. 2006. Download paper (725 kB PDF) (from Cornell University Library's arXiv.org server). To view a browser-based summary of the research, click here.
[21] NewScientist: Eight extremes: The hottest thing in the universe, 07 March 2011, which stated “While the details of this process are currently unknown, it must involve a fireball of relativistic particles heated to something in the region of a trillion kelvin”.

[22] Results of research by Stefan Bathe using the PHENIX detector on the Relativistic Heavy Ion Collider at Brookhaven National Laboratory in Upton, New York, U.S.A. Bathe has studied gold-gold, deuteron-gold, and proton-proton collisions to test the theory of quantum chromodynamics, the theory of the strong force that holds atomic nuclei together. Link to news release.

[23] Citation: How do physicists study particles? by CERN.

[24] The Planck frequency equals 1.854 87(14)×10⁴³ Hz (which is the reciprocal of one Planck time). Photons at the Planck frequency have a wavelength of one Planck length. The Planck temperature of 1.416 79(11)×10³² K equates to a calculated λmax = b/T wavelength of 2.045 31(16)×10⁻²⁶ nm. However, the actual peak emittance wavelength quantizes to the Planck length of 1.616 24(12)×10⁻²⁶ nm.

[25] Water's enthalpy of fusion (0 °C, 101.325 kPa) equates to 0.062284 eV per molecule, so adding one joule of thermal energy to 0 °C water ice causes 1.0021×10²⁰ water

molecules to break away from the crystal lattice and become liquid.

[26] Water's enthalpy of fusion is 6.0095 kJ mol⁻¹ (0 °C, 101.325 kPa). Citation: Water Structure and Science, Water Properties, Enthalpy of fusion (0 °C, 101.325 kPa) (by London South Bank University). Link to Web site. The only metals with enthalpies of fusion not in the range of 6–30 kJ mol⁻¹ are (on the high side): Ta, W, and Re; and (on the low side) most of the group 1 (alkali) metals plus Ga, In, Hg, Tl, Pb, and Np. Citation: Web Elements' home page.

[27] Xenon value citation: WebElements' xenon data (available values range from 2.3 to 3.1 kJ/mol). It is also noteworthy that helium's heat of fusion of only 0.021 kJ/mol is so weak a bonding force that zero-point energy prevents helium from freezing unless it is under a pressure of at least 25 atmospheres.

[28] CRC Handbook of Chemistry and Physics, 1st Student Edition, and Web Elements.

[29] H₂O specific heat capacity, Cp = 0.075327 kJ mol⁻¹ K⁻¹ (25 °C); enthalpy of fusion = 6.0095 kJ/mol (0 °C, 101.325 kPa); enthalpy of vaporization (liquid) = 40.657 kJ/mol (100 °C). Citation: Water Structure and Science, Water Properties (by London South Bank University). Link to Web site.

[30] Mobile conduction electrons are delocalized, i.e. not tied to a specific atom, and behave rather like a sort of quantum gas due to the effects of zero-point energy. Consequently, even at absolute zero, conduction electrons still move between atoms at the Fermi velocity of about 1.6×10⁶ m/s. Kinetic thermal energy adds to this speed and also causes delocalized electrons to travel farther away from the nuclei.

[31] No other crystal structure can exceed the 74.048% packing density of a closest-packed arrangement. The two regular crystal lattices found in nature that have this density are hexagonal close-packed (HCP) and face-centered cubic (FCC). These regular lattices are at the lowest possible energy state.
Diamond, although its Bravais lattice is face-centered cubic, is not a closest-packed structure. Note too that suitable crystalline chemical compounds, although usually composed of atoms of different sizes, can be considered as closest-packed structures when considered at the molecular level. One such compound is the common mineral known as magnesium aluminum spinel (MgAl₂O₄). It has a face-centered cubic crystal lattice and no change in pressure can produce a lattice with a lower energy state.

[32] Nearly half of the 92 naturally occurring chemical elements that can freeze under a vacuum also have a closest-packed crystal lattice. This set includes beryllium, osmium, neon, and iridium (but excludes helium); such solids therefore have no latent heat of solid-solid phase transitions to contribute to internal energy (symbol: U). In the calculation of enthalpy (formula: H = U + pV), internal energy may exclude different sources of thermal energy (particularly ZPE) depending on the nature of the analysis. Accordingly, all T=0 closest-packed


matter under a perfect vacuum has either minimal or zero enthalpy, depending on the nature of the analysis. Citation: Use of Legendre Transforms in Chemical Thermodynamics, Robert A. Alberty, Pure Appl. Chem., 73 (2001) 1349.

[33] Pressure also must be in absolute terms. The air still in a tire at 0 kPa-gage expands too as it gets hotter. It's not uncommon for engineers to overlook that one must work in terms of absolute pressure when compensating for temperature. For instance, a dominant manufacturer of aircraft tires published a document on temperature-compensating tire pressure, which used gage pressure in the formula. However, the high gage pressures involved (180 psi; 12.4 bar; 1.24 MPa) mean the error would be quite small. With low-pressure automobile tires, where gage pressures are typically around 2 bar (200 kPa), failing to adjust to absolute pressure results in a significant error. Referenced document: Aircraft Tire Ratings (155 kB PDF, here).

[34] Regarding the spelling "gage" vs. "gauge" in the context of pressures measured relative to atmospheric pressure, the preferred spelling varies by country and even by industry. Further, both spellings are often used within a particular industry or country. Industries in British English-speaking countries typically use the spelling "gauge pressure" to distinguish it from the pressure-measuring instrument, which in the U.K. is spelled pressure gage. For the same reason, many of the largest American manufacturers of pressure transducers and instrumentation use the spelling gage pressure (the convention used here) in their formal documentation to distinguish it from the instrument, which is spelled pressure gauge. (See Honeywell-Sensotec's FAQ page and Fluke Corporation's product search page.)

[35] A difference of 100 kPa is used here instead of the 101.325 kPa value of one standard atmosphere. In 1982, the International Union of Pure and Applied Chemistry (IUPAC) recommended that for the purposes of specifying the physical properties of substances, the standard pressure (atmospheric pressure) should be defined as precisely 100 kPa (≈750.062 Torr). Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 meters, which is closer to the 194-meter, worldwide median altitude of human habitation. For especially low-pressure or high-accuracy work, true atmospheric pressure must be measured. Citation: IUPAC.org, Gold Book, Standard Pressure.

[36] Absolute Zero and the Conquest of Cold, Shachtman, Tom, Mariner Books, 1999.

[37] A Brief History of Temperature Measurement; and Uppsala University (Sweden), Linnaeus' thermometer.

[38] bipm.org

[39] According to The Oxford English Dictionary (OED), the term "Celsius's thermometer" had been used at least as early as 1797. Further, the term "The Celsius or Centigrade thermometer" was again used in reference to a particular type of thermometer at least as early as 1850. The OED also cites this 1928 reporting of a temperature: "My altitude was about 5,800 metres, the temperature was −28° Celsius". However, dictionaries seek to find the earliest use of a word or term and are not a useful resource as regards the terminology used throughout the history of science. According to several writings of Dr. Terry Quinn CBE FRS, Director of the BIPM (1988–2004), including Temperature Scales from the early days of thermometry to the 21st century (148 kB PDF, here) as well as Temperature (2nd Edition / 1990 / Academic Press / 0125696817), the term Celsius in connection with the centigrade scale was not used whatsoever by the scientific or thermometry communities until after the CIPM and CGPM adopted the term in 1948. The BIPM wasn't even aware that degree Celsius was in sporadic, non-scientific use before that time. It's also noteworthy that the twelve-volume, 1933 edition of the OED did not even have a listing for the word Celsius (but did have listings for both centigrade and centesimal in the context of temperature measurement). The 1948 adoption of Celsius accomplished three objectives:

(a) All common temperature scales would have their units named after someone closely associated with them; namely, Kelvin, Celsius, Fahrenheit, Réaumur and Rankine.

(b) Notwithstanding the important contribution of Linnaeus, who gave the Celsius scale its modern form, Celsius's name was the obvious choice because it began with the letter C. Thus, the symbol °C that for centuries had been used in association with the name centigrade could continue to be used and would simultaneously inherit an intuitive association with the new name.

(c) The new name eliminated the ambiguity of the term centigrade, freeing it to refer exclusively to the French-language name for the unit of angular measurement.

6.4.8 External links

• Kinetic Molecular Theory of Gases. An explanation (with interactive animations) of the kinetic motion of molecules and how it affects matter. By David N. Blauch, Department of Chemistry, Davidson College.

• Zero Point Energy and Zero Point Field. A Web site with in-depth explanations of a variety of quantum effects. By Bernard Haisch, of Calphysics Institute.

6.5 Volume

For the general geometric concept, see volume.

In thermodynamics, the volume of a system is an important extensive parameter for describing its thermodynamic state. The specific volume, an intensive property, is the system's volume per unit of mass. Volume is a function of state and is interdependent with other thermodynamic properties such as pressure and temperature. For example, volume is related to the pressure and temperature of an ideal gas by the ideal gas law.
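The ideal-gas relation mentioned above can be made concrete with a short numerical sketch (the 0 °C / 100 kPa reference values follow the standard conditions used later in this section; the function name is illustrative):

```python
# Ideal gas law: p V = n R T, rearranged to V = n R T / p.
R = 8.314  # molar gas constant, J/(K*mol)

def ideal_gas_volume(n_mol, T_kelvin, p_pascal):
    """Volume in cubic metres of n_mol moles of an ideal gas at T and p."""
    return n_mol * R * T_kelvin / p_pascal

# One mole at 0 degC (273.15 K) and 100 kPa occupies roughly 22.7 litres.
V = ideal_gas_volume(1.0, 273.15, 100e3)
print(round(V * 1000, 2), "L")  # → 22.71 L
```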

The physical volume of a system may or may not coincide with a control volume used to analyze the system.

6.5.1 Overview

The volume of a thermodynamic system typically refers to the volume of the working fluid, such as, for example, the fluid within a piston. Changes to this volume may be made through an application of work, or may be used to produce work. An isochoric process, however, operates at constant volume, thus no work can be produced. Many other thermodynamic processes will result in a change in volume. A polytropic process, in particular, causes changes to the system so that the quantity pVⁿ is constant (where p is pressure, V is volume, and n is the polytropic index, a constant). Note that for specific polytropic indexes, a polytropic process will be equivalent to a constant-property process. For instance, for very large values of n approaching infinity, the process becomes constant-volume.

Gases are compressible, thus their volumes (and specific volumes) may be subject to change during thermodynamic processes. Liquids, however, are nearly incompressible, thus their volumes can often be taken as constant. In general, compressibility is defined as the relative volume change of a fluid or solid in response to a pressure, and may be determined for substances in any phase. Similarly, thermal expansion is the tendency of matter to change in volume in response to a change in temperature.

Many thermodynamic cycles are made up of varying processes, some of which maintain a constant volume and some of which do not. A vapor-compression refrigeration cycle, for example, follows a sequence where the refrigerant fluid transitions between the liquid and vapor states of matter. Typical units for volume are m³ (cubic meters), l (liters), and ft³ (cubic feet).

6.5.2 Heat and work

Mechanical work performed on a working fluid causes a change in the mechanical constraints of the system; in other words, for work to occur, the volume must be altered. Hence volume is an important parameter in characterizing many thermodynamic processes where an exchange of energy in the form of work is involved.

Volume is one of a pair of conjugate variables, the other being pressure. As with all conjugate pairs, the product is a form of energy. The product pV is the energy lost to a system due to mechanical work. This product is one term which makes up enthalpy H:

H = U + pV

where U is the internal energy of the system.

The second law of thermodynamics describes constraints on the amount of useful work which can be extracted from a thermodynamic system. In thermodynamic systems where the temperature and volume are held constant, the measure of "useful" work attainable is the Helmholtz free energy; in systems where the volume is not held constant, the measure of useful work attainable is the Gibbs free energy.

Similarly, the appropriate value of heat capacity to use in a given process depends on whether the process produces a change in volume. The heat capacity is a function of the amount of heat added to a system. In the case of a constant-volume process, all the heat affects the internal energy of the system (i.e., there is no pV-work, and all the heat affects the temperature). However, in a process without constant volume, the heat addition affects both the internal energy and the work (i.e., the enthalpy); thus the temperature changes by a different amount than in the constant-volume case, and a different heat capacity value is required.

6.5.3 Specific volume

See also: Specific volume

Specific volume (ν) is the volume occupied by a unit of mass of a material.[1] In many cases the specific volume is a useful quantity to determine because, as an intensive property, it can be used to determine the complete state of a system in conjunction with another independent intensive variable. The specific volume also allows systems to be studied without reference to an exact operating volume, which may not be known (nor significant) at some stages of analysis.

The specific volume of a substance is equal to the reciprocal of its mass density. Specific volume may be expressed in m³/kg, ft³/lbm, ft³/slug, or mL/g:

ν = V/m = 1/ρ

where V is the volume, m is the mass and ρ is the density of the material. For an ideal gas,

ν = R̄T/P

where R̄ is the specific gas constant, T is the temperature and P is the pressure of the gas.

Specific volume may also refer to molar volume.

6.5.4 Gas volume

Dependence on pressure and temperature

The volume of a gas increases proportionally to absolute temperature and decreases inversely proportionally to pressure, approximately according to the ideal gas law:

V = nRT/p

where:

• p is the pressure
• V is the volume
• n is the amount of substance of gas (moles)
• R is the gas constant, 8.314 J·K⁻¹·mol⁻¹
• T is the absolute temperature

To simplify, a volume of gas may be expressed as the volume it would have in standard conditions for temperature and pressure, which are 0 °C and 100 kPa.[2]

Humidity exclusion

In contrast to other gas components, water content in air, or humidity, depends to a higher degree on vaporization and condensation from or into water, which, in turn, mainly depends on temperature. Therefore, when applying more pressure to a gas saturated with water, all components will initially decrease in volume approximately according to the ideal gas law. However, some of the water will condense until the mixture returns to almost the same humidity as before, so the resulting total volume deviates from what the ideal gas law predicted. Conversely, decreasing temperature would also make some water condense, again making the final volume deviate from that predicted by the ideal gas law. Therefore, gas volume may alternatively be expressed excluding the humidity content: Vd (volume dry). This fraction more accurately follows the ideal gas law. By contrast, Vs (volume saturated) is the volume a gas mixture would have if humidity were added to it until saturation (or 100% relative humidity).

General conversion

To compare gas volume between two conditions of different temperature or pressure (1 and 2), assuming nR are the same, the following equation uses humidity exclusion in addition to the ideal gas law:

V2 = V1 × (T2/T1) × ((p1 − pw,1)/(p2 − pw,2))

where, in addition to the terms used in the ideal gas law:

• pw is the partial pressure of gaseous water during conditions 1 and 2, respectively

For example, calculating how much 1 liter of air (a) at 0 °C, 100 kPa, pw = 0 kPa (known as STPD, see below) would fill when breathed into the lungs, where it is mixed with water vapor (l) and quickly becomes 37 °C, 100 kPa, pw = 6.2 kPa (BTPS):

Vl = 1 l × (310 K / 273 K) × ((100 kPa − 0 kPa) / (100 kPa − 6.2 kPa)) = 1.21 l

Common conditions

Some common expressions of gas volume with defined or variable temperature, pressure and humidity inclusion are:

• ATPS: Ambient temperature (variable) and pressure (variable), saturated (humidity depends on temperature)
• ATPD: Ambient temperature (variable) and pressure (variable), dry (no humidity)
• BTPS: Body temperature (37 °C or 310 K) and pressure (generally same as ambient), saturated (47 mmHg or 6.2 kPa)
• STPD: Standard temperature (0 °C or 273 K) and pressure (760 mmHg (101.33 kPa) or 100 kPa (750.06 mmHg)), dry (no humidity)

Conversion factors

The following conversion factors can be used to convert between expressions for volume of a gas:[3]

Partial volume

See also: Partial pressure

The partial volume of a particular gas is the volume which the gas would have if it alone occupied the volume, with unchanged pressure and temperature, and is useful in gas

mixtures, e.g. air, to focus on one particular gas component, e.g. oxygen. It can be approximated both from partial pressure and from molar fraction:[4]

Vx = Vtot × (Px/Ptot) = Vtot × (nx/ntot)

where:

• Vx is the partial volume of any individual gas component (X)
• Vtot is the total volume of the gas mixture
• Px is the partial pressure of gas X
• Ptot is the total pressure of the gas mixture
• nx is the amount of substance of gas X
• ntot is the total amount of substance in the gas mixture
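The general conversion and partial-volume relations above can be sketched in a few lines of code; a minimal sketch that reproduces the STPD→BTPS worked example (function names are illustrative):

```python
def convert_gas_volume(V1, T1, T2, p1, p2, pw1=0.0, pw2=0.0):
    """General conversion with humidity exclusion:
    V2 = V1 * (T2/T1) * ((p1 - pw1) / (p2 - pw2)).
    Temperatures in kelvin; pressures in any one consistent unit."""
    return V1 * (T2 / T1) * ((p1 - pw1) / (p2 - pw2))

def partial_volume(V_tot, p_x, p_tot):
    """Partial volume of component x: V_x = V_tot * P_x / P_tot."""
    return V_tot * p_x / p_tot

# 1 litre of dry air at STPD (0 degC, 100 kPa, pw = 0) warmed and humidified
# to BTPS (37 degC, 100 kPa, pw = 6.2 kPa):
V_btps = convert_gas_volume(1.0, 273.0, 310.0, 100.0, 100.0, pw1=0.0, pw2=6.2)
print(round(V_btps, 2), "l")  # → 1.21 l

# Oxygen's share of 1 litre of air from its partial pressure
# (roughly 21 kPa out of 101 kPa total):
print(round(partial_volume(1.0, 21.0, 101.0), 3), "l")  # → 0.208 l
```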

6.5.5 See also

• Volumetric flow rate

6.5.6 References

[1] Cengel, Yunus A.; Boles, Michael A. (2002). Thermodynamics: an engineering approach. Boston: McGraw-Hill. p. 11. ISBN 0-07-238332-1.

[2] A. D. McNaught, A. Wilkinson (1997). Compendium of Chemical Terminology, The Gold Book (2nd ed.). Blackwell Science. ISBN 0-86542-684-8.

[3] Brown, Stanley; Miller, Wayne; Eason, M (2006). Exercise Physiology: Basis of Human Movement in Health and Disease. Lippincott Williams & Wilkins. p. 113. ISBN 0-7817-3592-0. Retrieved 13 February 2014.

[4] Page 200 in: Medical Biophysics. Flemming Cornelius. 6th edition, 2008.

Chapter 7

7.1 Thermodynamic system

A thermodynamic system is the material and radiative content of a macroscopic volume in space, that can be adequately described by thermodynamic state variables such as temperature, entropy, internal energy and pressure. Usually, by default, a thermodynamic system is taken to be in its own internal state of thermodynamic equilibrium, as opposed to a non-equilibrium state. The thermodynamic system is always enclosed by walls that separate it from its surroundings; these constrain the system. A thermodynamic system is subject to external interventions called thermodynamic operations; these alter the system's walls or its surroundings; as a result, the system undergoes thermodynamic processes according to the principles of thermodynamics. (This account mainly refers to the simplest kind of thermodynamic system; compositions of simple systems may also be considered.)

The thermodynamic state of a thermodynamic system is its internal state as specified by its state variables. In addition to the state variables, a thermodynamic account also requires a special kind of quantity called a state function, which is a function of the defining state variables. For example, if the state variables are internal energy, volume and mole amounts, that special function is the entropy. These quantities are inter-related by one or more functional relationships called equations of state, and by the system's characteristic equation. Thermodynamics imposes restrictions on the possible equations of state and on the characteristic equation. The restrictions are imposed by the laws of thermodynamics.

According to the permeabilities of the walls of a system, transfers of energy and matter occur between it and its surroundings, which are assumed to be unchanging over time, until a state of thermodynamic equilibrium is attained. The only states considered in equilibrium thermodynamics are equilibrium states. Classical thermodynamics includes equilibrium thermodynamics. It also considers: (a) systems considered in terms of cyclic sequences of processes rather than of states of the system; such were historically important in the conceptual development of the subject; and (b) systems considered in terms of processes described by steady flows; such are important in engineering.

In 1824 Sadi Carnot described a thermodynamic system as the working substance (such as the volume of steam) of any heat engine under study. The very existence of such thermodynamic systems may be considered a fundamental postulate of equilibrium thermodynamics, though it is only rarely cited as a numbered law.[1][2][3] According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate.[4]

In equilibrium thermodynamics the state variables do not include fluxes, because in a state of thermodynamic equilibrium all fluxes have zero values by postulation. Equilibrium thermodynamic processes may of course involve fluxes, but these must have ceased by the time a thermodynamic process or operation is complete, bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of matter or energy or entropy between a system and its surroundings.[5]

7.1.1 Overview

Thermodynamic equilibrium is characterized by absence of flow of matter or energy. Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time. Equilibrium thermodynamics is a relatively simple

and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'.

Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article.

Another kind of thermodynamic system is considered in engineering. It takes part in a flow process. The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process.

[Figure: a generic thermodynamic system, showing the SYSTEM enclosed by a BOUNDARY within its SURROUNDINGS.]

7.1.2 History

The first to create the concept of a thermodynamic system was the French physicist Sadi Carnot, whose 1824 Reflections on the Motive Power of Fire studied what he called the working substance, e.g., typically a body of water vapor, in steam engines, in regard to the system's ability to do work when heat is applied to it. The working substance could be put in contact with either a heat reservoir (a boiler), a cold reservoir (a stream of cold water), or a piston (to which the working body could do work by pushing on it). In 1850, the German physicist Rudolf Clausius generalized this picture to include the concept of the surroundings and, in his manuscript On the Moving Force of Heat, began referring to the system as a "working body".

The article Carnot heat engine shows the original piston-and-cylinder diagram used by Carnot in discussing his ideal engine; below, we see the Carnot engine as it is typically modeled in current use:

[Figure: Carnot engine diagram (modern), where heat flows from a high-temperature TH furnace through the fluid of the "working body" (working substance) and into the cold sink TC, thus forcing the working substance to do mechanical work W on the surroundings, via cycles of contractions and expansions.]

In the diagram shown, the "working body" (system), a term introduced by Clausius in 1850, can be any fluid or vapor body through which heat Q can be introduced or transmitted to produce work. In 1824, Sadi Carnot, in his famous paper Reflections on the Motive Power of Fire, had postulated that the fluid body could be any substance capable of expansion, such as vapor of water, vapor of alcohol, vapor of mercury, a permanent gas, or air. Though, in these early years, engines came in a number of configurations, typically QH was supplied by a boiler, wherein water boiled over a furnace; QC was typically a stream of cold flowing water in the form of a condenser located on a separate part of the engine. The output work W was the movement of the piston as it turned a crank-arm, which typically turned a pulley to lift water out of flooded salt mines. Carnot defined work as "weight lifted through a height".

7.1.3 Systems in equilibrium

At thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems not in equilibrium. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. This considerably simplifies the analysis.

In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered to be in a state of thermodynamic equilibrium. The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states.

For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly.

7.1.4 Walls

A system is enclosed by walls that bound it and connect it to its surroundings.[6][7][8][9][10][11] Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct.

A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available.

The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings.

A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time. The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy.[12][13][14][15][16][17][18] This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used.[19][20]

Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.

7.1.5 Surroundings

See also: Environment (systems)

The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regard to these interactions.

7.1.6 Closed system

Main article: Closed system § In thermodynamics

In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but heat and work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both is dependent on the property of its boundary.

• Adiabatic boundary – not allowing any heat exchange: A thermally isolated system

• Rigid boundary – not allowing exchange of work: A mechanically isolated system

One example is fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion, but no mass transfer takes place either way.

Beginning with the first law of thermodynamics for an open system, this is expressed as:

ΔU = Q − W + mi(h + ½v² + gz)i − me(h + ½v² + gz)e

where U is internal energy, Q is the heat added to the system, and W is the work done by the system. Since no mass is transferred in or out of the system, both expressions involving mass flow are zero and the first law of thermodynamics for a closed system is derived. The first law of thermodynamics for a closed system states that the increase of internal energy of the system equals the amount of heat added to the system minus the work done by the system. For infinitesimal changes the first law for closed systems is stated by:

dU = δQ − δW.

If the work is due to a volume expansion by dV at a pressure P then:

δW = P dV.

For a homogeneous system undergoing a reversible process, the second law of thermodynamics reads:

δQ = T dS

where T is the absolute temperature and S is the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as:

dU = T dS − P dV.

For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. However, for systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:

∑j=1…m aij Nj = bi0

where Nj is the number of j-type molecules, aij is the number of atoms of element i in molecule j, and bi0 is the total number of atoms of element i in the system, which remains constant, since the system is closed. There is one such equation for each element in the system.

7.1.7 Isolated system

Main article: Isolated system

An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.

Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere.[21][22][23][24][25] However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena.

In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed that a system (for example, a gas) was isolated: that is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.

The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease, e.g. when heat is extracted from the system.
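The closed-system condition above, that ∑ aij Nj stays fixed for every element i, can be checked numerically. A minimal sketch (the methane-combustion reaction and its atom-count matrix are illustrative choices, not from the text):

```python
# Rows: elements i; columns: molecule types j (CH4, O2, CO2, H2O).
# a[i][j] = number of atoms of element i in one molecule of type j.
a = {
    "C": [1, 0, 1, 0],
    "H": [4, 0, 0, 2],
    "O": [0, 2, 2, 1],
}

def atom_totals(N):
    """The b_i0 values for molecule counts N: total atoms of each element."""
    return {elem: sum(aij * Nj for aij, Nj in zip(row, N)) for elem, row in a.items()}

before = [2, 4, 0, 0]  # 2 CH4 + 4 O2, before the reaction
after = [0, 0, 2, 4]   # after CH4 + 2 O2 -> CO2 + 2 H2O runs to completion

# Molecule counts change, but each element's atom total is conserved:
assert atom_totals(before) == atom_totals(after)
print(atom_totals(after))  # → {'C': 2, 'H': 8, 'O': 8}
```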

179

7.1.9

Open system

In an open system, matter may pass in and out of some segments of the system boundaries. There may be other segments of the system boundaries that pass heat or work but not matter. Respective account is kept of the transfers It is important to note that isolated systems are not equivof energy across those and any other several boundary segalent to closed systems. Closed systems cannot exchange ments. In thermodynamic equilibrium, all flows have vanmatter with the surroundings, but can exchange energy. Isoished. lated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe). 7.1.10 See also It is worth noting that 'closed system' is often used in thermodynamics discussions when 'isolated system' would be correct - i.e. there is an assumption that energy does not enter or leave the system.

7.1.8

Selective transfer of matter

For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes. An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential.

A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number.

A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance i it is usually denoted μi. The corresponding extensive variable can be the number of moles Ni of the component substance in the system.

For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be the same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics.[26]

• Physical system

7.1.11 References

[1] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, p. 20.
[2] Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA, p. 119.
[3] Marsland, R. III, Brown, H.R., Valente, G. (2015). Time and irreversibility in axiomatic thermodynamics, Am. J. Phys., 83(7): 628–634.
[4] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, p. 22.
[5] Eu, B.C. (2002). Generalized Thermodynamics. The Thermodynamics of Irreversible Processes and Generalized Hydrodynamics, Kluwer Academic Publishers, Dordrecht, ISBN 1-4020-0788-4.
[6] Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London, p. 44.
[7] Tisza, L. (1966), pp. 109, 112.
[8] Haase, R. (1971), p. 7.
[9] Adkins, C.J. (1968/1975), p. 4.
[10] Callen, H.B. (1960/1985), pp. 15, 17.
[11] Tschoegl, N.W. (2000), p. 5.
[12] Prigogine, I., Defay, R. (1950/1954). Chemical Thermodynamics, Longmans, Green & Co, London, p. 66.
[13] Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA, pp. 112–113.
[14] Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, (1st edition 1949) 5th edition 1967, North-Holland, Amsterdam, p. 14.
[15] Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, pp. 6–7.
[16] Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081, p. 3.
[17] Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5, p. 5.
[18] Silbey, R.J., Alberty, R.A., Bawendi, M.G. (1955/2005). Physical Chemistry, fourth edition, Wiley, Hoboken NJ, p. 4.
[19] Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, ISBN 0-471-86256-8, p. 17.
[20] ter Haar, D., Wergeland, H. (1966). Elements of Thermodynamics, Addison-Wesley Publishing, Reading MA, p. 43.
[21] Kolesnikov, I.M., Vinokurov, V.A., Kolesnikov, S.I. (2001). Thermodynamics of Spontaneous and Non-Spontaneous Processes, Nova Science Publishers, p. 136, ISBN 1-56072-904-X.
[22] “A System and Its Surroundings”, ChemWiki, University of California, Davis. Retrieved May 2012.
[23] “Hyperphysics”, Department of Physics and Astronomy, Georgia State University. Retrieved May 2012.
[24] Sanctuary, B. “Open, Closed and Isolated Systems in Physical Chemistry”, Foundations of Quantum Mechanics and Physical Chemistry, McGill University, Montreal. Retrieved May 2012.
[25] Material and Energy Balances for Engineers and Environmentalists (PDF), Imperial College Press, p. 7. Retrieved May 2012.
[26] Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3, pp. 19–23.



Chapter 8

Chapter 8. Material Properties

8.1 Heat capacity

Heat capacity or thermal capacity is a measurable physical quantity equal to the ratio of the heat added to (or removed from) an object to the resulting temperature change.[1] The SI unit of heat capacity is the joule per kelvin (J/K) and the dimensional form is L²MT⁻²Θ⁻¹. Specific heat is the amount of heat needed to raise the temperature of one gram of mass by 1 degree Celsius.

Heat capacity is an extensive property of matter, meaning it is proportional to the size of the system. When expressing the same phenomenon as an intensive property, the heat capacity is divided by the amount of substance, mass, or volume, so that the quantity is independent of the size or extent of the sample. The molar heat capacity is the heat capacity per unit amount (SI unit: mole) of a pure substance, and the specific heat capacity, often simply called specific heat, is the heat capacity per unit mass of a material. Occasionally, in engineering contexts, the volumetric heat capacity is used.

Temperature reflects the average randomized kinetic energy of constituent particles of matter (e.g. atoms or molecules) relative to the centre of mass of the system, while heat is the transfer of energy across a system boundary into the body other than by work or matter transfer. Translation, rotation, and vibration of atoms represent the degrees of freedom of motion which classically contribute to the heat capacity of gases, while only vibrations are needed to describe the heat capacities of most solids,[2] as shown by the Dulong–Petit law. Other contributions can come from magnetic[3] and electronic[4] degrees of freedom in solids, but these rarely make substantial contributions.

For quantum mechanical reasons, at any given temperature, some of these degrees of freedom may be unavailable, or only partially available, to store thermal energy. In such cases, the specific heat capacity is a fraction of the maximum. As the temperature approaches absolute zero, the specific heat capacity of a system approaches zero, due to loss of available degrees of freedom. Quantum theory can be used to quantitatively predict the specific heat capacity of simple systems.

8.1.1 History

Main article: History of heat

In a previous theory of heat common in the early modern period, heat was thought to be a measurement of an invisible fluid, known as the caloric. Bodies were capable of holding a certain amount of this fluid, hence the term heat capacity, named and first investigated by Scottish chemist Joseph Black in the 1750s.[5]

Since the development of thermodynamics during the 18th and 19th centuries, scientists have abandoned the idea of a physical caloric, and instead understand heat as changes in a system’s internal energy. That is, heat is no longer considered a fluid; rather, heat is a transfer of disordered energy. Nevertheless, at least in English, the term “heat capacity” survives. In some other languages, the term thermal capacity is preferred, and it is also sometimes used in English.

8.1.2 Units

Extensive properties

In the International System of Units, heat capacity has the unit joules per kelvin. An object’s heat capacity (symbol C) is defined as the ratio of the amount of heat energy transferred to an object to the resulting increase in temperature of the object,

C = Q / ΔT,

assuming that the temperature range is sufficiently small that the heat capacity is constant. More generally, because heat capacity does depend upon temperature, it should be written as

C(T) = δQ / dT,

where the symbol δ is used to imply that heat is a path function.

Heat capacity is an extensive property, meaning it depends on the extent or size of the physical system in question. A sample containing twice the amount of substance as another sample requires the transfer of twice the amount of heat (Q) to achieve the same change in temperature (ΔT).

Intensive properties

For many experimental and theoretical purposes it is more convenient to report heat capacity as an intensive property, an intrinsic characteristic of a particular substance. This is most often accomplished by expressing the property in relation to a unit of mass. In science and engineering, such properties are often prefixed with the term specific.[6] International standards now recommend that specific heat capacity always refer to division by mass.[7] The units for the specific heat capacity are [c] = J/(kg⋅K).

In chemistry, heat capacity is often specified relative to one mole, the unit of amount of substance, and is called the molar heat capacity. It has the unit [Cmol] = J/(mol⋅K).

For some considerations it is useful to specify the volume-specific heat capacity, commonly called volumetric heat capacity, which is the heat capacity per unit volume and has SI units [s] = J/(m³⋅K). This is used almost exclusively for liquids and solids, since for gases it may be confused with specific heat capacity at constant volume.

Alternative unit systems

While SI units are the most widely used, some countries and industries also use other systems of measurement. One older unit of heat is the kilogram-calorie (Cal), originally defined as the energy required to raise the temperature of one kilogram of water by one degree Celsius, typically from 14.5 to 15.5 °C. The specific average heat capacity of water on this scale would therefore be exactly 1 Cal/(C°⋅kg). However, due to the temperature-dependence of the specific heat, a large number of different definitions of the calorie came into being.
The calorie, once very prevalent (especially its smaller cgs variant, the gram-calorie (cal), defined so that the specific heat of water would be 1 cal/(K⋅g)), is now archaic in most fields.

In the United States, other units of measure for heat capacity may be quoted in disciplines such as construction, civil engineering, and chemical engineering. A still common system is the English Engineering Units, in which the mass reference is pound mass and the temperature is specified in degrees Fahrenheit or Rankine. One (rare) unit of heat is the pound calorie (lb-cal), defined as the amount of heat required to raise the temperature of one pound of water by one degree Celsius. On this scale the specific heat of water would be 1 lb-cal/(K⋅lbm). More common is the British thermal unit, the standard unit of heat in the U.S. construction industry. This is defined such that the specific heat of water is 1 BTU/(F°⋅lb).
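The unit systems above can be related with standard conversion factors. The sketch below uses the international-table BTU and the thermochemical calorie; the function name is mine, and the factors are approximate standard values, so treat it as an illustration rather than a metrology-grade converter.

```python
# Converting specific heat between the unit systems discussed above.
# Standard (approximate) factors: 1 BTU ≈ 1055.06 J, 1 lb = 0.45359237 kg,
# and a 1 K temperature interval equals 1.8 °F.

J_PER_BTU = 1055.06       # international-table BTU, approximate
KG_PER_LB = 0.45359237

def j_per_kg_k_to_btu_per_lb_f(c):
    """Convert a specific heat from J/(kg*K) to BTU/(lb*degF)."""
    return c * KG_PER_LB / (J_PER_BTU * 1.8)

# Water, c ≈ 4186.8 J/(kg*K): by construction of the BTU, this should
# come out very close to 1 BTU/(lb*degF).
print(round(j_per_kg_k_to_btu_per_lb_f(4186.8), 3))
```

Running this prints a value of essentially 1, which is exactly the consistency the text describes: the BTU is defined so that water's specific heat is 1 BTU/(F°⋅lb).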

8.1.3 Measurement of heat capacity

It may appear that the way to measure heat capacity is to add a known amount of heat to an object and measure the change in temperature. This works reasonably well for many solids. However, for precise measurements, and especially for gases, other aspects of measurement become critical.

The heat capacity can be affected by many of the state variables that describe the thermodynamic system under study. These include the starting and ending temperature, as well as the pressure and the volume of the system before and after heat is added. So rather than a single way to measure heat capacity, there are actually several slightly different measurements of heat capacity. The most commonly used methods are to hold the object either at constant pressure (CP) or at constant volume (CV). Gases and liquids are typically also measured at constant volume.

Measurements under constant pressure produce larger values than those at constant volume because the constant-pressure values also include heat energy that is used to do work to expand the substance against the constant pressure as its temperature increases. This difference is particularly notable in gases, where values under constant pressure are typically 30% to 66.7% greater than those at constant volume. Hence the heat capacity ratio of gases is typically between 1.3 and 1.67.[8]

The specific heat capacities of substances comprising molecules (as distinct from monatomic gases) are not fixed constants and vary somewhat depending on temperature. Accordingly, the temperature at which the measurement is made is usually also specified. Examples of two common ways to cite the specific heat of a substance are as follows:[9]

• Water (liquid): CP = 4185.5 J/(kg⋅K) (15 °C, 101.325 kPa)
• Water (liquid): CV,m = 74.539 J/(mol⋅K) (25 °C)

For liquids and gases, it is important to know the pressure to which given heat capacity data refer. Most published data are given for standard pressure.
However, quite different standard conditions for temperature and pressure have been defined by different organizations. The International Union of Pure and Applied Chemistry (IUPAC) changed its recommendation from one atmosphere to the round value 100 kPa (≈750.062 Torr).[notes 1]
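The basic measurement idea described above, supplying a known amount of heat and dividing by the observed temperature rise, can be sketched in a few lines. The calorimetry numbers below are made up for illustration (they are not measured data), but are chosen so the result lands near water's tabulated specific heat.

```python
# Sketch of the naive calorimetry described above: C = Q / dT.
# The "measurement" numbers here are hypothetical, not real data.

def heat_capacity(q_joules, delta_t_kelvin):
    """C = Q / dT, valid when dT is small enough that C is ~constant."""
    return q_joules / delta_t_kelvin

# Suppose 0.500 kg of water absorbs 10 kJ and warms by 4.78 K.
C = heat_capacity(10_000.0, 4.78)   # J/K, for the whole sample
c_specific = C / 0.500              # J/(kg*K), the intensive value

print(round(C, 1), round(c_specific))
```

The specific value comes out near 4184 J/(kg⋅K), close to the 4185.5 J/(kg⋅K) figure quoted for water above; the small discrepancy is the kind of effect (temperature and pressure dependence) that makes precise calorimetry harder than this sketch suggests.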

Calculation from first principles

The path integral Monte Carlo method is a numerical approach for determining the values of heat capacity, based on quantum dynamical principles. However, good approximations can be made for gases in many states using simpler methods outlined below. For many solids composed of relatively heavy atoms (atomic number > iron), at non-cryogenic temperatures, the heat capacity at room temperature approaches 3R = 24.94 joules per kelvin per mole of atoms (Dulong–Petit law; R is the gas constant). Low temperature approximations for both gases and solids at temperatures less than their characteristic Einstein temperatures or Debye temperatures can be made by the methods of Einstein and Debye discussed below.

Thermodynamic relations and definition of heat capacity

The internal energy of a closed system changes either by adding heat to the system or by the system performing work. Written mathematically we have

Δe_system = e_in − e_out

or

dU = δQ − δW.

For work as a result of an increase of the system volume we may write

dU = δQ − P dV.

If the heat is added at constant volume, then the second term of this relation vanishes and one readily obtains

(∂U/∂T)V = (∂Q/∂T)V = CV.

This defines the heat capacity at constant volume, CV, which is also related to changes in internal energy. Another useful quantity is the heat capacity at constant pressure, CP. This quantity refers to the change in the enthalpy of the system, which is given by

H = U + PV.

A small change in the enthalpy can be expressed as

dH = δQ + V dP,

and therefore, at constant pressure, we have

(∂H/∂T)P = (∂Q/∂T)P = CP.

These two equations,

(∂U/∂T)V = (∂Q/∂T)V = CV,
(∂H/∂T)P = (∂Q/∂T)P = CP,

are property relations and are therefore independent of the type of process. In other words, they are valid for any substance going through any process. Both the internal energy and enthalpy of a substance can change with the transfer of energy in many forms, i.e., heat.[10]

Relation between heat capacities

Main article: Relations between heat capacities

Measuring the heat capacity, sometimes referred to as specific heat, at constant volume can be prohibitively difficult for liquids and solids. That is, small temperature changes typically require large pressures to maintain a liquid or solid at constant volume, implying the containing vessel must be nearly rigid or at least very strong (see coefficient of thermal expansion and compressibility). Instead it is easier to measure the heat capacity at constant pressure (allowing the material to expand or contract freely) and solve for the heat capacity at constant volume using mathematical relationships derived from the basic thermodynamic laws. Starting from the fundamental thermodynamic relation one can show

CP − CV = T (∂P/∂T)V,n (∂V/∂T)P,n,

where the partial derivatives are taken at constant volume and constant number of particles, and at constant pressure and constant number of particles, respectively.


This can also be rewritten as

CP − CV = V T α² / βT,

where α is the coefficient of thermal expansion and βT is the isothermal compressibility.

Heat capacity ratio

The heat capacity ratio or adiabatic index is the ratio of the heat capacity at constant pressure to the heat capacity at constant volume. It is sometimes also known as the isentropic expansion factor.

Ideal gas

For an ideal gas, evaluating the partial derivatives above according to the equation of state, where R is the gas constant for an ideal gas,

P V = n R T,

CP − CV = T (∂P/∂T)V,n (∂V/∂T)P,n,

P = nRT/V ⇒ (∂P/∂T)V,n = nR/V,

V = nRT/P ⇒ (∂V/∂T)P,n = nR/P.

Substituting,

T (∂P/∂T)V,n (∂V/∂T)P,n = T (nR/V)(nR/P) = T (nR)²/(PV) = T (nR)²/(nRT) = nR,

this equation reduces simply to Mayer's relation:

CP,m − CV,m = R.

Specific heat capacity

The specific heat capacity of a material on a per mass basis is

c = ∂C/∂m,

which in the absence of phase transitions is equivalent to

c = Em = C/m = C/(ρV),

where C is the heat capacity of a body made of the material in question, m is the mass of the body, V is the volume of the body, and ρ = m/V is the density of the material.

For gases, and also for other materials under high pressures, there is need to distinguish between different boundary conditions for the processes under consideration (since values differ significantly between different conditions). Typical processes for which a heat capacity may be defined include isobaric (constant pressure, dP = 0) or isochoric (constant volume, dV = 0) processes. The corresponding specific heat capacities are expressed as

cP = (∂C/∂m)P,
cV = (∂C/∂m)V.

From the results of the previous section, dividing through by the mass gives the relation

cP − cV = α²T/(ρβT).

A related parameter to c is CV⁻¹, the volumetric heat capacity. In engineering practice, cV for solids or liquids often signifies a volumetric heat capacity rather than a constant-volume one. In such cases, the mass-specific heat capacity (specific heat) is often explicitly written with the subscript m, as cm. Of course, from the above relationships, for solids one writes

cm = C/m = cvolumetric/ρ.

For pure homogeneous chemical compounds with an established molecular or molar mass, or when a molar quantity is established, heat capacity as an intensive property can be expressed on a per mole basis instead of a per mass basis by the following equations analogous to the per mass equations:

CP,m = (∂C/∂n)P,
CV,m = (∂C/∂n)V,

where n is the number of moles in the body or thermodynamic system. One may refer to such a per mole quantity as molar heat capacity to distinguish it from specific heat capacity on a per mass basis.

Polytropic heat capacity

The polytropic heat capacity is calculated for processes in which all the thermodynamic properties (pressure, volume, temperature) change:

Ci,m = (∂C/∂n).

The most important polytropic processes run between the adiabatic and the isotherm functions; the polytropic index is between 1 and the adiabatic exponent (γ or κ).

Dimensionless heat capacity

The dimensionless heat capacity of a material is

C* = C/(nR) = C/(Nk),

where

C is the heat capacity of a body made of the material in question (J/K),
n is the amount of substance in the body (mol),
R is the gas constant (J/(K⋅mol)),
N is the number of molecules in the body (dimensionless),
k is Boltzmann’s constant (J/(K⋅molecule)).

In the ideal gas article, the dimensionless heat capacity C* is expressed as ĉ, and is related there directly to half the number of degrees of freedom per particle. This holds true for quadratic degrees of freedom, a consequence of the equipartition theorem.

More generally, the dimensionless heat capacity relates the logarithmic increase in temperature to the increase in the dimensionless entropy per particle S* = S/(Nk), measured in nats:

C* = dS*/d ln T.

Alternatively, using base-2 logarithms, C* relates the base-2 logarithmic increase in temperature to the increase in the dimensionless entropy measured in bits.[12]

Heat capacity at absolute zero

From the definition of entropy

T dS = δQ,

the absolute entropy can be calculated by integrating from zero kelvin up to the final temperature Tf:

S(Tf) = ∫0^Tf δQ/T = ∫0^Tf (δQ/dT)(dT/T) = ∫0^Tf C(T) dT/T.

The heat capacity must be zero at zero temperature in order for the above integral not to yield an infinite absolute entropy, which would violate the third law of thermodynamics. One of the strengths of the Debye model is that (unlike the preceding Einstein model) it predicts the proper mathematical form of the approach of heat capacity toward zero, as absolute zero temperature is approached.

Negative heat capacity (stars)

Most physical systems exhibit a positive heat capacity. However, even though it can seem paradoxical at first,[13][14] there are some systems for which the heat capacity is negative. These are inhomogeneous systems which do not meet the strict definition of thermodynamic equilibrium. They include gravitating objects such as stars and galaxies, and also sometimes some nano-scale clusters of a few tens of atoms close to a phase transition.[15] A negative heat capacity can result in a negative temperature.

According to the virial theorem, for a self-gravitating body like a star or an interstellar gas cloud, the average potential energy UPot and the average kinetic energy UKin are locked together in the relation

UPot = −2 UKin.

The total energy U (= UPot + UKin) therefore obeys


U = −UKin.

If the system loses energy, for example by radiating energy away into space, the average kinetic energy actually increases. If a temperature is defined by the average kinetic energy, then the system can therefore be said to have a negative heat capacity.[16]

A more extreme version of this occurs with black holes. According to black hole thermodynamics, the more mass and energy a black hole absorbs, the colder it becomes. In contrast, if it is a net emitter of energy, through Hawking radiation, it will become hotter and hotter until it boils away.
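The virial bookkeeping above can be made concrete with arbitrary illustrative numbers (the energies and function name below are hypothetical): since U = −UKin, removing energy from the system makes UKin larger, which is exactly the negative-heat-capacity behaviour just described.

```python
# Virial-theorem bookkeeping for a self-gravitating system:
# U_pot = -2 * U_kin, hence U_total = U_pot + U_kin = -U_kin.
# Energy values are arbitrary, for illustration only.

def kinetic_from_total(u_total):
    """Invert U_total = -U_kin for a virialized, bound system."""
    return -u_total

u_before = -1.0e34            # J; bound system, total energy negative
u_after = u_before - 1.0e33   # the system radiates 1e33 J away

k_before = kinetic_from_total(u_before)
k_after = kinetic_from_total(u_after)

# Losing energy *increased* the kinetic energy, i.e. the "temperature":
print(k_after > k_before)  # True -> effectively negative heat capacity
```

The sketch shows only the sign logic, not any real stellar model: every joule radiated away deepens the potential well by two joules and raises the kinetic energy by one.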

8.1.4 Theory of heat capacity

Factors that affect specific heat capacity

Molecules undergo many characteristic internal vibrations. Potential energy stored in these internal degrees of freedom contributes to a sample’s energy content,[17][18] but not to its temperature. More internal degrees of freedom tend to increase a substance’s specific heat capacity, so long as temperatures are high enough to overcome quantum effects.

For any given substance, the heat capacity of a body is directly proportional to the amount of substance it contains (measured in terms of mass, moles, or volume). Doubling the amount of substance in a body doubles its heat capacity, etc. However, when this effect has been corrected for, by dividing the heat capacity by the quantity of substance in a body, the resulting specific heat capacity is a function of the structure of the substance itself. In particular, it depends on the number of degrees of freedom that are available to the particles in the substance; each independent degree of freedom allows the particles to store thermal energy. The translational kinetic energy of substance particles is only one of the many possible degrees of freedom which manifests as temperature change, and thus the larger the number of degrees of freedom available to the particles of a substance other than translational kinetic energy, the larger will be the specific heat capacity for the substance. For example, rotational kinetic energy of gas molecules stores heat energy in a way that increases heat capacity, since this energy does not contribute to temperature.

In addition, quantum effects require that whenever energy be stored in any mechanism associated with a bound system which confers a degree of freedom, it must be stored in certain minimal-sized deposits (quanta) of energy, or else not stored at all. Such effects limit the full ability of some degrees of freedom to store energy when their lowest energy storage quantum amount is not easily supplied at the average energy of particles at a given temperature. In general, for this reason, specific heat capacities tend to fall at lower temperatures where the average thermal energy available to each particle degree of freedom is smaller, and thermal energy storage begins to be limited by these quantum effects. Due to this process, as temperature falls toward absolute zero, so also does heat capacity.

Degrees of freedom

Main article: degrees of freedom (physics and chemistry)

Molecules are quite different from the monatomic gases like helium and argon. With monatomic gases, thermal energy comprises only translational motions. Translational motions are ordinary, whole-body movements in 3D space whereby particles move about and exchange energy in collisions, like rubber balls in a vigorously shaken container.[19] These simple movements in the three dimensions of space mean individual atoms have three translational degrees of freedom. A degree of freedom is any form of energy in which heat transferred into an object can be stored. This can be in translational kinetic energy, rotational kinetic energy, or other forms such as potential energy in vibrational modes. Only three translational degrees of freedom (corresponding to the three independent directions in space) are available for any individual atom, whether it is free, as a monatomic molecule, or bound into a polyatomic molecule.

As to rotation about an atom’s axis (again, whether the atom is bound or free), its energy of rotation is proportional to the moment of inertia for the atom, which is extremely small compared to moments of inertia of collections of atoms. This is because almost all of the mass of a single atom is concentrated in its nucleus, which has a radius too small to give a significant moment of inertia. In contrast, the spacing of quantum energy levels for a rotating object is inversely proportional to its moment of inertia, and so this spacing becomes very large for objects with very small moments of inertia. For these reasons, the contribution from rotation of atoms on their axes is essentially zero in monatomic gases, because the energy spacing of the associated quantum levels is too large for significant thermal energy to be stored in rotation of systems with such small moments of inertia. For similar reasons, axial rotation around bonds joining atoms in diatomic gases (or along the linear axis in a linear molecule of any length) can also be neglected as a possible “degree of freedom”, since such rotation is similar to rotation of monatomic atoms, and so occurs about an axis with a moment of inertia too small to be able to store significant heat energy.

In polyatomic molecules, other rotational modes may become active, due to the much higher moments of inertia about certain axes which do not coincide with the linear axis of a linear molecule. These modes take the place of some translational degrees of freedom for individual atoms, since the atoms are moving in 3-D space as the molecule rotates. The narrowing of quantum mechanically determined energy spacing between rotational states results from situations where atoms are rotating around an axis that does not connect them, and thus form an assembly that has a large moment of inertia. This small difference between energy states allows the kinetic energy of this type of rotational motion to store heat energy at ambient temperatures.
Furthermore, internal vibrational degrees of freedom also may become active (these are also a type of translation, as seen from the view of each atom). In summary, molecules are complex objects with a population of atoms that may move about within the molecule in a number of different ways, and each of these ways of moving is capable of storing energy if the temperature is sufficient. The heat capacity of molecular substances (on a “per-atom” or atom-molar basis) does not exceed the heat capacity of monatomic gases, unless vibrational modes are brought into play. The reason for this is that vibrational modes allow energy to be stored as potential energy in intra-atomic bonds in a molecule, which are not available to atoms in monatomic gases. Up to about twice as much energy (on a per-atom basis) per unit of temperature increase can be stored in a solid as in a monatomic gas, by this mechanism of storing energy in the potentials of interatomic bonds. This gives many solids about twice the atom-molar heat capacity at room temperature of monatomic gases.
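The degree-of-freedom counting used throughout this section can be packaged as a small equipartition calculator. This is the classical (fully excited) limit only, so it overstates real heat capacities at temperatures where modes are frozen out; the function name and argument names are mine.

```python
# Classical equipartition sketch: each quadratic degree of freedom
# contributes R/2 to the constant-volume molar heat capacity.
# Each vibrational mode counts twice (kinetic + potential energy).

R = 8.314  # molar gas constant, J/(mol*K), rounded

def cv_molar(trans=3, rot=0, vib_modes=0):
    """Constant-volume molar heat capacity in the classical limit."""
    dof = trans + rot + 2 * vib_modes
    return dof * R / 2.0

print(round(cv_molar(), 2))                    # monatomic gas: 3/2 R
print(round(cv_molar(rot=2), 2))               # rigid diatomic: 5/2 R
print(round(cv_molar(rot=2, vib_modes=1), 2))  # vibrating diatomic: 7/2 R
```

The three printed values reproduce the 3/2 R, 5/2 R, and 7/2 R figures discussed in this section; which of them a real gas exhibits depends on whether its rotational and vibrational modes are thermally accessible.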

However, quantum effects heavily affect the actual ratio at lower temperatures (i.e., much lower than the melting temperature of the solid), especially in solids with light and tightly bound atoms (e.g., beryllium metal or diamond). Polyatomic gases store intermediate amounts of energy, giving them a “per-atom” heat capacity that is between that of monatomic gases (3⁄2 R per mole of atoms, where R is the ideal gas constant) and the maximum of fully excited warmer solids (3 R per mole of atoms). For gases, heat capacity never falls below the minimum of 3⁄2 R per mole (of molecules), since the kinetic energy of gas molecules is always available to store at least this much thermal energy. However, at cryogenic temperatures in solids, heat capacity falls toward zero, as temperature approaches absolute zero.

Example of temperature-dependent specific heat capacity, in a diatomic gas

To illustrate the role of various degrees of freedom in storing heat, we may consider nitrogen, a diatomic molecule that has five active degrees of freedom at room temperature: the three comprising translational motion plus two rotational degrees of freedom internally. Although the constant-volume molar heat capacity of nitrogen at this temperature is five-thirds that of monatomic gases, on a per-mole-of-atoms basis it is five-sixths that of a monatomic gas. The reason for this is the loss of a degree of freedom due to the bond when it does not allow storage of thermal energy. Two separate nitrogen atoms would have a total of six degrees of freedom: the three translational degrees of freedom of each atom. When the atoms are bonded, the molecule still has only three translational degrees of freedom, as the two atoms in the molecule move as one. However, the molecule cannot be treated as a point object, and the moment of inertia has increased sufficiently about two axes to allow two rotational degrees of freedom to be active at room temperature, giving five degrees of freedom. The moment of inertia about the third axis remains small, as this is the axis passing through the centres of the two atoms, and so is similar to the small moment of inertia for atoms of a monatomic gas. Thus, this degree of freedom does not act to store heat, and does not contribute to the heat capacity of nitrogen. The heat capacity per atom for nitrogen (5/2 R per mole of molecules = 5/4 R per mole of atoms) is therefore less than for a monatomic gas (3/2 R per mole of molecules or atoms), so long as the temperature remains low enough that no vibrational degrees of freedom are activated.[20]

At higher temperatures, however, nitrogen gas gains one more degree of internal freedom, as the molecule is excited into higher vibrational modes that store thermal energy. A vibrational degree of freedom contributes a heat capacity of 1/2 R each for kinetic and potential energy, for a total of R. Now the bond is contributing heat capacity, and (because of storage of energy in potential energy) is contributing more than if the atoms were not bonded. With full thermal excitation of bond vibration, the heat capacity per volume, or per mole of gas molecules, approaches

188 seven-thirds that of monatomic gases. Significantly, this is seven-sixths of the monatomic gas value on a mole-ofatoms basis, so this is now a higher heat capacity per atom than the monatomic figure, because the vibrational mode enables for diatomic gases allows an extra degree of potential energy freedom per pair of atoms, which monatomic gases cannot possess.[21][22] See thermodynamic temperature for more information on translational motions, kinetic (heat) energy, and their relationship to temperature. However, even at these large temperatures where gaseous nitrogen is able to store 7/6ths of the energy per atom of a monatomic gas (making it more efficient at storing energy on an atomic basis), it still only stores 7/12 ths of the maximal per-atom heat capacity of a solid, meaning it is not nearly as efficient at storing thermal energy on an atomic basis, as solid substances can be. This is typical of gases, and results because many of the potential bonds which might be storing potential energy in gaseous nitrogen (as opposed to solid nitrogen) are lacking, because only one of the spatial dimensions for each nitrogen atom offers a bond into which potential energy can be stored without increasing the kinetic energy of the atom. In general, solids are most efficient, on an atomic basis, at storing thermal energy (that is, they have the highest per-atom or per-mole-of-atoms heat capacity). Per mole of different units Per mole of molecules When the specific heat capacity, c, of a material is measured (lowercase c means the unit quantity is in terms of mass), different values arise because different substances have different molar masses (essentially, the weight of the individual atoms or molecules). In solids, thermal energy arises due to the number of atoms that are vibrating. “Molar” heat capacity per mole of molecules, for both gases and solids, offer figures which are arbitrarily large, since molecules may be arbitrarily large. 
Such heat capacities are thus not intensive quantities, since the quantity of mass being considered can be increased without limit.
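The ratios quoted above (seven-thirds, seven-sixths, and seven-twelfths) follow from simple bookkeeping of R-contributions per degree of freedom. A short illustrative check with exact fractions:

```python
from fractions import Fraction

# Constant-volume heat capacities per mole of molecules, in units of R
cv_monatomic = Fraction(3, 2)        # translation only
cv_diatomic_full = Fraction(7, 2)    # translation + rotation + vibration

# Per mole of atoms, a diatomic gas stores half the per-molecule value
cv_diatomic_per_atom = cv_diatomic_full / 2
cv_solid_per_atom = Fraction(3)      # Dulong–Petit limit, 3 R per mole of atoms

print(cv_diatomic_full / cv_monatomic)           # 7/3, per mole of molecules
print(cv_diatomic_per_atom / cv_monatomic)       # 7/6, per mole of atoms
print(cv_diatomic_per_atom / cv_solid_per_atom)  # 7/12, versus the solid limit
```

The same arithmetic underlies the per-volume, per-mole-of-molecules, and per-mole-of-atoms comparisons in the text.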

CHAPTER 8. MATERIAL PROPERTIES

Per mole of atoms

Conversely, for molecular-based substances (which also absorb heat into their internal degrees of freedom), massive, complex molecules with a high atomic count—like octane—can store a great deal of energy per mole and yet are quite unremarkable on a mass basis, or on a per-atom basis. This is because, in fully excited systems, heat is stored independently by each atom in a substance, not primarily by the bulk motion of molecules. Thus, it is the heat capacity per mole of atoms, not per mole of molecules, which is the intensive quantity, and which comes closest to being a constant for all substances at high temperatures. This relationship was noticed empirically in 1819, and is called the Dulong–Petit law, after its two discoverers.[23] Historically, the fact that specific heat capacities are approximately equal when corrected by the presumed weight of the atoms of solids was an important piece of data in favor of the atomic theory of matter.

Because of the connection of heat capacity to the number of atoms, some care should be taken to specify a mole-of-molecules basis vs. a mole-of-atoms basis when comparing specific heat capacities of molecular solids and gases. Ideal gases have the same numbers of molecules per volume, so increasing molecular complexity adds heat capacity on a per-volume and per-mole-of-molecules basis, but may lower or raise heat capacity on a per-atom basis, depending on whether the temperature is sufficient to store energy as atomic vibration.

In solids, the quantitative limit of heat capacity in general is about 3 R per mole of atoms, where R is the ideal gas constant. This 3 R value is about 24.9 J/(mol·K). Six degrees of freedom (three kinetic and three potential) are available to each atom, and each of these six contributes 1/2 R specific heat capacity per mole of atoms.[24] This limit of 3 R per mole specific heat capacity is approached at room temperature for most solids, with significant departures at this temperature only for solids composed of the lightest atoms which are bound very strongly, such as beryllium (where the value is only 66% of 3 R) or diamond (where it is only 24% of 3 R). These large departures are due to quantum effects which prevent full distribution of heat into all vibrational modes, when the energy difference between vibrational quantum states is very large compared to the average energy available to each atom from the ambient temperature.

For monatomic gases, the specific heat is only half of 3 R per mole, i.e. 3/2 R per mole, due to loss of all potential energy degrees of freedom in these gases. For polyatomic gases, the heat capacity will be intermediate between these values on a per-mole-of-atoms basis, and (for heat-stable molecules) would approach the limit of 3 R per mole of atoms for gases composed of complex molecules, at higher temperatures at which all vibrational modes accept excitation energy. This is because very large and complex gas molecules may be thought of as relatively large blocks of solid matter which have lost only a relatively small fraction of degrees of freedom, as compared to a fully integrated solid. For a list of heat capacities per atom-mole of various substances, in terms of R, see the last column of the table of heat capacities below.

Corollaries of these considerations for solids (volume-specific heat capacity)

Since the bulk density of a solid

chemical element is strongly related to its molar mass, there exists a noticeable inverse correlation between a solid’s density and its specific heat capacity on a per-mass basis. This is due to two factors: a very approximate tendency of atoms of most elements to be about the same size, and the constancy of mole-specific heat capacity; together these result in a good correlation between the volume of any given solid chemical element and its total heat capacity. Another way of stating this is that the volume-specific heat capacity (volumetric heat capacity) of solid elements is roughly a constant. The molar volume of solid elements is very roughly constant, and (even more reliably) so also is the molar heat capacity for most solid substances. These two factors determine the volumetric heat capacity, which as a bulk property may be striking in consistency. For example, the element uranium is a metal which has a density almost 36 times that of the metal lithium, but uranium’s specific heat capacity on a volumetric basis (i.e. per given volume of metal) is only 18% larger than lithium’s.
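The uranium–lithium comparison can be reproduced with rounded handbook values; the densities and per-mass specific heats below are approximate figures assumed for illustration:

```python
# Approximate handbook values (assumed for illustration)
density = {"Li": 0.534, "U": 19.1}        # g/cm^3
specific_heat = {"Li": 3.58, "U": 0.116}  # J/(g*K), per-mass basis

# Volumetric heat capacity, J/(cm^3*K): per-mass value times density
volumetric = {el: density[el] * specific_heat[el] for el in density}

print(density["U"] / density["Li"])        # ~36: uranium is far denser
print(volumetric["U"] / volumetric["Li"])  # ~1.16: per unit volume they are close
```

With these rounded inputs the volumetric ratio comes out near 16%, close to the ~18% figure quoted in the text.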

Since the volume-specific corollary of the Dulong–Petit specific heat capacity relationship requires that atoms of all elements take up (on average) the same volume in solids, there are many departures from it, with most of these due to variations in atomic size. For instance, arsenic, which is only 14.5% less dense than antimony, has nearly 59% more specific heat capacity on a mass basis. In other words, even though an ingot of arsenic is only about 17% larger than an antimony one of the same mass, it absorbs about 59% more heat for a given temperature rise. The heat capacity ratios of the two substances closely follow the ratios of their molar volumes (the ratios of numbers of atoms in the same volume of each substance); the departure from the correlation to simple volumes in this case is due to the lighter arsenic atoms being significantly more closely packed than antimony atoms, instead of similar in size. In other words, similar-sized atoms would cause a mole of arsenic to be 63% larger than a mole of antimony, with a correspondingly lower density, allowing its volume to more closely mirror its heat capacity behavior.

Other factors

Hydrogen bonds

Hydrogen-containing polar molecules like ethanol, ammonia, and water have powerful, intermolecular hydrogen bonds when in their liquid phase. These bonds provide another place where heat may be stored as potential energy of vibration, even at comparatively low temperatures. Hydrogen bonds account for the fact that liquid water stores nearly the theoretical limit of 3 R per mole of atoms, even at relatively low temperatures (i.e. near the freezing point of water).

Impurities

In the case of alloys, there are several conditions in which small impurity concentrations can greatly affect the specific heat. Alloys may exhibit marked differences in behaviour even when small amounts of impurities are one element of the alloy; for example, impurities in semiconducting ferromagnetic alloys may lead to quite different specific heat properties.[25]

The simple case of the monatomic gas

In the case of a monatomic gas such as helium under constant volume, if it is assumed that no electronic or nuclear quantum excitations occur, each atom in the gas has only 3 degrees of freedom, all of a translational type. No energy dependence is associated with the degrees of freedom which define the position of the atoms, while the degrees of freedom corresponding to the momenta of the atoms are quadratic, and thus contribute to the heat capacity. There are N atoms, each of which has 3 components of momentum, which leads to 3N total degrees of freedom. This gives:

C_V = (∂U/∂T)_V = (3/2) N k_B = (3/2) n R

C_{V,m} = C_V / n = (3/2) R

where

C_V is the heat capacity at constant volume of the gas,
C_{V,m} is the molar heat capacity at constant volume of the gas,
N is the total number of atoms present in the container,
n is the number of moles of atoms present in the container (n is the ratio of N and Avogadro’s number), and
R is the ideal gas constant (8.3144621(75) J/(mol·K)), equal to the product of Boltzmann’s constant k_B and Avogadro’s number.

The following table shows experimental molar constant-volume heat capacity measurements taken for each noble monatomic gas (at 1 atm and 25 °C). It is apparent from the table that the experimental heat capacities of the monatomic noble gases agree with this simple application of statistical mechanics to a very high degree.
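The prediction above is easy to evaluate numerically; a minimal check, where the comparison figure is the commonly tabulated value of roughly 12.5 J/(mol·K) for the noble gases:

```python
R = 8.3144621  # ideal gas constant, J/(mol*K)

# Monatomic ideal gas: three translational degrees of freedom, R/2 each
cv_molar = 1.5 * R
print(cv_molar)  # ~12.47 J/(mol*K)

# Typical experimental value for noble gases at 1 atm and 25 C
experimental = 12.5
print(abs(cv_molar - experimental) / experimental)  # deviation well under 1%
```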



The molar heat capacity of a monatomic gas at constant pressure is then

C_{p,m} = C_{V,m} + R = (5/2) R

Diatomic gas

Constant volume specific heat capacity of a diatomic gas (idealised). As temperature increases, heat capacity goes from 3/2 R (translation contribution only), to 5/2 R (translation plus rotation), finally to a maximum of 7/2 R (translation + rotation + vibration).

In the somewhat more complex case of an ideal gas of diatomic molecules, the presence of internal degrees of freedom is apparent. In addition to the three translational degrees of freedom, there are rotational and vibrational degrees of freedom. In general, the number of degrees of freedom, f, in a molecule with na atoms is 3na:

f = 3na

Mathematically, there are a total of three rotational degrees of freedom, one corresponding to rotation about each of the axes of three-dimensional space. However, in practice only the existence of two degrees of rotational freedom for linear molecules will be considered. This approximation is valid because the moment of inertia about the internuclear axis is vanishingly small with respect to other moments of inertia in the molecule (this is due to the very small rotational moments of single atoms, owing to the concentration of almost all their mass at their centers; compare also the extremely small radii of the atomic nuclei with the distance between them in a diatomic molecule). Quantum mechanically, it can be shown that the interval between successive rotational energy eigenstates is inversely proportional to the moment of inertia about that axis. Because the moment of inertia about the internuclear axis is vanishingly small relative to the other two rotational axes, the energy spacing can be considered so high that no excitations of the rotational state can occur unless the temperature is extremely high.

It is easy to calculate the expected number of vibrational degrees of freedom (or vibrational modes): there are three degrees of translational freedom and two degrees of rotational freedom, therefore

f_vib = f − f_trans − f_rot = 6 − 3 − 2 = 1

Each rotational and translational degree of freedom will contribute R/2 to the total molar heat capacity of the gas. Each vibrational mode, however, will contribute R, because for each vibrational mode there is a potential and a kinetic energy component, and both contribute R/2. Therefore, a diatomic molecule would be expected to have a molar constant-volume heat capacity of

C_{V,m} = (3/2) R + R + R = (7/2) R = 3.5 R

where the terms originate from the translational, rotational, and vibrational degrees of freedom, respectively.

The following is a table of some molar constant-volume heat capacities of various diatomic gases at standard temperature (25 °C = 298 K).

From the above table, clearly there is a problem with the above theory. All of the diatomics examined have heat capacities that are lower than those predicted by the equipartition theorem, except Br2. However, as the atoms composing the molecules become heavier, the heat capacities move closer to their expected values. One of the reasons for this phenomenon is the quantization of vibrational, and to a lesser extent, rotational states. In fact, if it is assumed that the molecules remain in their lowest energy vibrational state, because the inter-level energy spacings for vibration energies are large, the predicted molar constant-volume heat capacity for a diatomic molecule becomes just that from the contributions of translation and rotation:

C_{V,m} = (3/2) R + R = (5/2) R = 2.5 R

which is a fairly close approximation of the heat capacities of the lighter molecules in the above table. If the quantum harmonic oscillator approximation is made, it turns out that the quantum vibrational energy level spacings are actually inversely proportional to the square root of the reduced mass of the atoms composing the diatomic molecule. Therefore, in the case of the heavier diatomic molecules such as chlorine or bromine, the quantum vibrational energy level spacings become finer, which allows more excitations into higher vibrational levels at lower temperatures. This limit for storing heat capacity in vibrational modes, as discussed above, becomes 7R/2 = 3.5 R per mole of gas molecules, which is fairly consistent with the measured value for Br2 at room temperature. As temperatures rise, all diatomic gases approach this value.

Constant volume specific heat capacity of diatomic gases (real gases) between about 200 K and 2000 K. This temperature range is not large enough to include both quantum transitions in all gases. Instead, at 200 K, all but hydrogen are fully rotationally excited, so all have at least 5/2 R heat capacity. (Hydrogen is already below 5/2, but it will require cryogenic conditions for even H2 to fall to 3/2 R). Further, only the heavier gases fully reach 7/2 R at the highest temperature, due to the relatively small vibrational energy spacing of these molecules. HCl and H2 begin to make the transition above 500 K, but have not achieved it by 1000 K, since their vibrational energy level spacing is too wide to fully participate in heat capacity, even at this temperature.

General gas phase

The specific heat of the gas is best conceptualized in terms of the degrees of freedom of an individual molecule. The different degrees of freedom correspond to the different ways in which the molecule may store energy. The molecule may store energy in its translational motion according to the formula:

E = (1/2) m (v_x² + v_y² + v_z²)

where m is the mass of the molecule and [v_x, v_y, v_z] is the velocity of the center of mass of the molecule. Each direction of motion constitutes a degree of freedom, so that there are three translational degrees of freedom.

In addition, a molecule may have rotational motion. The kinetic energy of rotational motion is generally expressed as

E = (1/2) (I_1 ω_1² + I_2 ω_2² + I_3 ω_3²)

where I is the moment of inertia tensor of the molecule, and [ω_1, ω_2, ω_3] is the angular velocity pseudo-vector (in a coordinate system aligned with the principal axes of the molecule). In general, then, there will be three additional degrees of freedom corresponding to the rotational motion of the molecule (for linear molecules one of the inertia tensor terms vanishes and there are only two rotational degrees of freedom). The degrees of freedom corresponding to translations and rotations are called the rigid degrees of freedom, since they do not involve any deformation of the molecule.

The motions of the atoms in a molecule which are not part of its gross translational motion or rotation may be classified as vibrational motions. It can be shown that if there are n atoms in the molecule, there will be as many as v = 3n − 3 − n_r vibrational degrees of freedom, where n_r is the number of rotational degrees of freedom. A vibrational degree of freedom corresponds to a specific way in which all the atoms of a molecule can vibrate. The actual number of possible vibrations may be less than this maximal one, due to various symmetries.

For example, triatomic nitrous oxide N2O will have only 2 degrees of rotational freedom (since it is a linear molecule) and contains n = 3 atoms: thus the number of possible vibrational degrees of freedom will be v = (3·3) − 3 − 2 = 4. There are four ways or “modes” in which the three atoms can vibrate, corresponding to 1) a mode in which an atom at each end of the molecule moves away from, or towards, the center atom at the same time, 2) a mode in which either end atom moves asynchronously with regard to the other two, and 3) and 4) two modes in which the molecule bends out of line, from the center, in the two possible planar directions that are orthogonal to its axis. Each vibrational degree of freedom confers two total degrees of freedom, since each vibrational energy mode partitions into one kinetic and one potential mode. This would give nitrous oxide 3 translational, 2 rotational, and 4 vibrational modes (these last giving 8 vibrational degrees of freedom) for storing energy. This is a total of f = 3 + 2 + 8 = 13 total energy-storing degrees of freedom for N2O.

For a bent molecule like water H2O, a similar calculation gives 9 − 3 − 3 = 3 modes of vibration, and 3 (translational) + 3 (rotational) + 6 (vibrational) = 12 degrees of freedom.
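The degree-of-freedom bookkeeping described above is mechanical enough to script; a sketch, where `linear` selects 2 versus 3 rotational degrees of freedom:

```python
def energy_storing_dof(n_atoms, linear):
    """Count energy-storing degrees of freedom of an ideal polyatomic molecule.

    Translation contributes 3, rotation 2 (linear) or 3 (non-linear), and each
    vibrational mode contributes 2 (one kinetic plus one potential term).
    """
    n_rot = 2 if linear else 3
    n_vib_modes = 3 * n_atoms - 3 - n_rot
    return 3 + n_rot + 2 * n_vib_modes

print(energy_storing_dof(3, linear=True))   # N2O: 3 + 2 + 2*4 = 13
print(energy_storing_dof(3, linear=False))  # H2O: 3 + 3 + 2*3 = 12
```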

The storage of energy into degrees of freedom

If the molecule could be entirely described using classical mechanics, then the theorem of equipartition of energy could be used to predict that each degree of freedom would have an average energy of (1/2)kT, where k is Boltzmann’s constant and T is the temperature. Our calculation of the constant-volume heat capacity would be straightforward. Each molecule would hold, on average, an energy of (f/2)kT, where f is the total number of degrees of freedom in the molecule. Note that Nk = R if N is Avogadro’s number, which is the case in considering the heat capacity of a mole of molecules. Thus, the total internal energy of the gas would be (f/2)NkT, where N is the total number of molecules. The heat capacity (at constant volume) would then be a constant (f/2)Nk, the mole-specific heat capacity would be (f/2)R, the molecule-specific heat capacity would be (f/2)k, and the dimensionless heat capacity would be just f/2. Here again, each vibrational degree of freedom contributes 2 to f. Thus, a mole of nitrous oxide would have a total constant-volume heat capacity (including vibration) of (13/2)R by this calculation.

In summary, the molar heat capacity (mole-specific heat capacity) of an ideal gas with f degrees of freedom is given by

C_{V,m} = (f/2) R

This equation applies to all polyatomic gases, if the degrees of freedom are known.[26] The constant-pressure heat capacity for any gas would exceed this by an extra factor of R (see Mayer’s relation, above). As an example, C_p would be a total of (15/2)R per mole for nitrous oxide.

The effect of quantum energy levels in storing energy in degrees of freedom

The various degrees of freedom cannot generally be considered to obey classical mechanics, however. Classically, the energy residing in each degree of freedom is assumed to be continuous—it can take on any positive value, depending on the temperature. In reality, the amount of energy that may reside in a particular degree of freedom is quantized: it may only be increased and decreased in finite amounts. A good estimate of the size of this minimum amount is the energy of the first excited state of that degree of freedom above its ground state. For example, the first vibrational state of the hydrogen chloride (HCl) molecule has an energy of about 5.74 × 10−20 joule. If this amount of energy were deposited in a classical degree of freedom, it would correspond to a temperature of about 4156 K.

If the temperature of the substance is so low that the equipartition energy of (1/2)kT is much smaller than this excitation energy, then there will be little or no energy in this degree of freedom. This degree of freedom is then said to be “frozen out”. As mentioned above, the temperature corresponding to the first excited vibrational state of HCl is about 4156 K. For temperatures well below this value, the vibrational degrees of freedom of the HCl molecule will be frozen out. They will contain little energy and will not contribute to the thermal energy or the heat capacity of HCl gas.

Energy storage mode “freeze-out” temperatures

It can be seen that for each degree of freedom there is a critical temperature at which the degree of freedom “unfreezes” and begins to accept energy in a classical way. In the case of translational degrees of freedom, this temperature is that at which the thermal wavelength of the molecules is roughly equal to the size of the container. For a container of macroscopic size (e.g. 10 cm) this temperature is extremely small and has no significance, since the gas will certainly liquefy or freeze before this low temperature is reached. For any real gas, translational degrees of freedom may be considered to always be classical and contain an average energy of (3/2)kT per molecule.

The rotational degrees of freedom are the next to “unfreeze”. In a diatomic gas, for example, the critical temperature for this transition is usually a few tens of kelvins, although with a very light molecule such as hydrogen the rotational energy levels will be spaced so widely that rotational heat capacity may not completely “unfreeze” until considerably higher temperatures are reached. Finally, the vibrational degrees of freedom are generally the last to unfreeze. As an example, for diatomic gases, the critical temperature for the vibrational motion is usually a few thousands of kelvins, and thus for the nitrogen in our example at room temperature, no vibration modes would be excited, and the constant-volume heat capacity at room temperature is (5/2)R per mole, not (7/2)R per mole. As seen above, with some unusually heavy gases such as iodine gas I2, or bromine gas Br2, some vibrational heat capacity may be observed even at room temperatures.
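Combining C_{V,m} = (f/2)R with this freeze-out behaviour reproduces the familiar room-temperature numbers; a sketch using the mode counts from the text:

```python
R = 8.3144621  # ideal gas constant, J/(mol*K)

def cv_molar(f):
    """Molar constant-volume heat capacity of an ideal gas with f active dof."""
    return f / 2 * R

# Diatomic gas (e.g. N2) at room temperature: vibration frozen out, f = 5
print(cv_molar(5) / R)   # 2.5, i.e. (5/2) R
# Same gas with vibration fully excited: f = 7
print(cv_molar(7) / R)   # 3.5, i.e. (7/2) R
# Nitrous oxide with all 13 energy-storing dof active (classical limit)
print(cv_molar(13))      # ~54 J/(mol*K), i.e. (13/2) R
```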

It should be noted that it has been assumed that atoms have no rotational or internal degrees of freedom. This is in fact untrue. For example, atomic electrons can exist in excited states, and even the atomic nucleus can have excited states as well. Each of these internal degrees of freedom is assumed to be frozen out due to its relatively high excitation energy. Nevertheless, for sufficiently high temperatures, these degrees of freedom cannot be ignored. In a few exceptional cases, such molecular electronic transitions are of



sufficiently low energy that they contribute to heat capacity at room temperature, or even at cryogenic temperatures. One example of an electronic transition degree of freedom which contributes heat capacity at standard temperature is that of nitric oxide (NO), in which the single electron in an anti-bonding molecular orbital has energy transitions which contribute to the heat capacity of the gas even at room temperature.

An example of a nuclear magnetic transition degree of freedom which is of importance to heat capacity is the transition which converts the spin isomers of hydrogen gas (H2) into each other. At room temperature, the proton spins of hydrogen gas are aligned 75% of the time, resulting in orthohydrogen when they are. Thus, some thermal energy has been stored in the degree of freedom available when parahydrogen (in which spins are anti-aligned) absorbs energy and is converted to the higher-energy ortho form. However, at the temperature of liquid hydrogen, not enough heat energy is available to produce orthohydrogen (that is, the transition energy between forms is large enough to “freeze out” at this low temperature), and thus the parahydrogen form predominates. The heat capacity of the transition is sufficient to release enough heat, as orthohydrogen converts to the lower-energy parahydrogen, to boil the hydrogen liquid to gas again, if this evolved heat is not removed with a catalyst after the gas has been cooled and condensed. This example also illustrates the fact that some modes of storage of heat may not be in constant equilibrium with each other in substances, and heat absorbed or released from such phase changes may “catch up” with temperature changes of substances only after a certain time. In other words, the heat evolved and absorbed from the ortho-para isomeric transition contributes to the heat capacity of hydrogen on long time-scales, but not on short time-scales. These time scales may also depend on the presence of a catalyst.

Less exotic phase changes may contribute to the heat capacity of substances and systems as well, as (for example) when water is converted back and forth from solid to liquid or gas form. Phase changes store heat energy entirely in breaking the bonds of the potential energy interactions between molecules of a substance. As in the case of hydrogen, it is also possible for phase changes to be hindered as the temperature drops, so that they do not catch up and become apparent without a catalyst. For example, it is possible to supercool liquid water to below the freezing point and not observe the heat evolved when the water changes to ice, so long as the water remains liquid. This heat appears instantly when the water freezes.

Solid phase

Main articles: Einstein solid, Debye model and Kinetic theory of solids

The dimensionless heat capacity divided by three, as a function of temperature as predicted by the Debye model and by Einstein’s earlier model. The horizontal axis is the temperature divided by the Debye temperature. Note that, as expected, the dimensionless heat capacity is zero at absolute zero, and rises to a value of three as the temperature becomes much larger than the Debye temperature. The red line corresponds to the classical limit of the Dulong–Petit law.

For matter in a crystalline solid phase, the Dulong–Petit law, which was discovered empirically, states that the molar heat capacity assumes the value 3 R. Indeed, for solid metallic chemical elements at room temperature, molar heat capacities range from about 2.8 R to 3.4 R. Large exceptions at the lower end involve solids composed of relatively low-mass, tightly bonded atoms, such as beryllium at 2.0 R, and diamond at only 0.735 R. The latter conditions create larger quantum vibrational energy spacing, so that many vibrational modes have energies too high to be populated (and thus are “frozen out”) at room temperature. At the higher end of possible heat capacities, heat capacity may exceed 3 R by modest amounts, due to contributions from anharmonic vibrations in solids, and sometimes a modest contribution from conduction electrons in metals. These are not degrees of freedom treated in the Einstein or Debye theories. The theoretical maximum heat capacity for multi-atomic gases at higher temperatures, as the molecules become larger, also approaches the Dulong–Petit limit of 3 R, so long as this is calculated per mole of atoms, not molecules. The reason for this behavior is that, in theory, gases with very large molecules have almost the same high-temperature heat capacity as solids, lacking only the (small) heat capacity contribution that comes from potential energy that cannot be stored between separate molecules in a gas.
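The Dulong–Petit value of 3 R per mole of atoms translates directly into per-mass specific heats once molar masses are known. A quick sketch; the molar masses and measured room-temperature values below are rounded handbook figures assumed for comparison:

```python
R = 8.3144621  # ideal gas constant, J/(mol*K)
dulong_petit = 3 * R  # ~24.9 J/(mol*K) per mole of atoms

# Rounded molar masses (g/mol) and measured specific heats (J/(g*K))
elements = {
    "Cu": (63.55, 0.385),
    "Fe": (55.85, 0.449),
}

for symbol, (molar_mass, measured) in elements.items():
    predicted = dulong_petit / molar_mass  # J/(g*K), per-mass prediction
    print(symbol, round(predicted, 3), measured)
```

For these ordinary metals the classical prediction lands within a few percent of the measured figures; light, stiff solids such as beryllium or diamond miss badly, for the quantum reasons given above.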



The Dulong–Petit limit results from the equipartition theorem, and as such is only valid in the classical limit of a microstate continuum, which is a high-temperature limit. For light and non-metallic elements, as well as most of the common molecular solids based on carbon compounds at standard ambient temperature, quantum effects may also play an important role, as they do in multi-atomic gases. These effects usually combine to give heat capacities lower than 3 R per mole of atoms in the solid, although heat capacities calculated per mole of molecules in molecular solids may be more than 3 R. For example, the heat capacity of water ice at the melting point is about 4.6 R per mole of molecules, but only 1.5 R per mole of atoms. As noted, heat capacity values far lower than 3 R “per atom” (as is the case with diamond and beryllium) result from “freezing out” of possible vibration modes for light atoms at suitably low temperatures, just as happens in many low-mass-atom gases at room temperatures (where vibrational modes are all frozen out). Because of high crystal binding energies, the effects of vibrational mode freezing are observed in solids more often than liquids: for example, the heat capacity of liquid water is twice that of ice at near the same temperature, and is again close to the 3 R per mole of atoms of the Dulong–Petit theoretical maximum.

Liquid phase

A general theory of the heat capacity of liquids has not yet been achieved, and is still an active area of research. It was long thought that phonon theory is not able to explain the heat capacity of liquids, since liquids only sustain longitudinal, but not transverse phonons, which in solids are responsible for 2/3 of the heat capacity. However, Brillouin scattering experiments with neutrons and with X-rays, confirming an intuition of Yakov Frenkel,[27] have shown that transverse phonons do exist in liquids, albeit restricted to frequencies above a threshold called the Frenkel frequency. Since most energy is contained in these high-frequency modes, a simple modification of the Debye model is sufficient to yield a good approximation to experimental heat capacities of simple liquids.[28]

Amorphous materials can be considered a type of liquid. The specific heat of amorphous materials has characteristic discontinuities at the glass transition temperature. These discontinuities are frequently used to detect the glass transition temperature, where a supercooled liquid transforms to a glass.[29]

8.1.5 Table of specific heat capacities

See also: List of thermal conductivities

Note that the especially high molar values, as for paraffin, gasoline, water and ammonia, result from calculating specific heats in terms of moles of molecules. If specific heat is expressed per mole of atoms for these substances, none of the constant-volume values exceed, to any large extent, the theoretical Dulong–Petit limit of 25 J·mol−1·K−1 = 3 R per mole of atoms (see the last column of this table). Paraffin, for example, has very large molecules and thus a high heat capacity per mole, but as a substance it does not have remarkable heat capacity in terms of volume, mass, or atom-mole (which is just 1.41 R per mole of atoms, or less than half of most solids, in terms of heat capacity per atom).

In the last column, major departures of solids at standard temperatures from the Dulong–Petit law value of 3 R are usually due to low atomic weight plus high bond strength (as in diamond) causing some vibration modes to have too much energy to be available to store thermal energy at the measured temperature. For gases, departure from 3 R per mole of atoms in this table is generally due to two factors: (1) failure of the higher quantum-energy-spaced vibration modes in gas molecules to be excited at room temperature, and (2) loss of potential energy degrees of freedom for small gas molecules, simply because most of their atoms are not bonded maximally in space to other atoms, as happens in many solids.

A. Assuming an altitude of 194 metres above mean sea level (the world-wide median altitude of human habitation), an indoor temperature of 23 °C, a dewpoint of 9 °C (40.85% relative humidity), and 760 mm-Hg sea level-corrected barometric pressure (molar water vapor content = 1.16%).

*Derived data by calculation. This is for water-rich tissues such as brain. The whole-body average figure for mammals is approximately 2.9 J·cm−3·K−1.[38]

8.1.6 Mass heat capacity of building materials

See also: Thermal mass

(Usually of interest to builders and solar designers)

8.1.7 Further reading

• Encyclopædia Britannica, 2015, “Heat capacity (Alternate title: thermal capacity),” see , accessed 14 February 2015.

• Emmerich Wilhelm & Trevor M. Letcher, Eds., 2010, Heat Capacities: Liquids, Solutions and Vapours, Cambridge, U.K.: Royal Society of Chemistry, ISBN 0-85404-176-1, see , accessed 14 February 2014. A very recent outline of selected traditional aspects of



the title subject, including a recent specialist introduction to its theory, Emmerich Wilhelm, “Heat Capacities: Introduction, Concepts, and Selected Applications” (Chapter 1, pp. 1–27), chapters on traditional and more contemporary experimental methods such as photoacoustic methods, e.g., Jan Thoen & Christ Glorieux, “Photothermal Techniques for Heat Capacities,” and chapters on newer research interests, including on the heat capacities of proteins and other polymeric systems (Chs. 16, 15), of liquid crystals (Ch. 17), etc.

8.1.8 See also

• Quantum statistical mechanics
• Heat capacity ratio
• Statistical mechanics
• Thermodynamic equations
• Thermodynamic databases for pure substances
• Heat equation
• Heat transfer coefficient
• Latent heat
• Material properties (thermodynamics)
• Joback method (Estimation of heat capacities)
• Specific melting heat
• Specific heat of vaporization
• Volumetric heat capacity
• Thermal mass
• R-value (insulation)
• Storage heater

8.1.9 Notes

[1] IUPAC, Compendium of Chemical Terminology, 2nd ed. (the “Gold Book”) (1997). Online corrected version: (2006–) "Standard Pressure". Besides being a round number, this had a very practical effect: relatively few people live and work at precisely sea level; 100 kPa equates to the mean pressure at an altitude of about 112 metres (which is closer to the 194-metre, world-wide median altitude of human habitation).

8.1.10 References

[1] Halliday, David; Resnick, Robert (2013). Fundamentals of Physics. Wiley. p. 524.

[2] Kittel, Charles (2005). Introduction to Solid State Physics (8th ed.). Hoboken, New Jersey, USA: John Wiley & Sons. p. 141. ISBN 0-471-41526-X.

[3] Blundell, Stephen (2001). Magnetism in Condensed Matter. Oxford Master Series in Condensed Matter Physics (1st ed.). Oxford University Press. p. 27. ISBN 978-0-19-850591-4.

[4] Kittel, Charles (2005). Introduction to Solid State Physics (8th ed.). Hoboken, New Jersey, USA: John Wiley & Sons. p. 141. ISBN 0-471-41526-X.

[5] Laidler, Keith J. (1993). The World of Physical Chemistry. Oxford University Press. ISBN 0-19-855919-4.

[6] International Union of Pure and Applied Chemistry, Physical Chemistry Division. “Quantities, Units and Symbols in Physical Chemistry” (PDF). Blackwell Sciences. p. 7. “The adjective specific before the name of an extensive quantity is often used to mean divided by mass.”

[7] International Bureau of Weights and Measures (2006), The International System of Units (SI) (PDF) (8th ed.), ISBN 92-822-2213-6.

[8] Lange’s Handbook of Chemistry, 10th ed., p. 1524.

[9] “Water – Thermal Properties”. Engineeringtoolbox.com. Retrieved 2013-10-31.

[10] Thermodynamics: An Engineering Approach by Yunus A. Cengel and Michael A. Boles.

[11] Yunus A. Cengel and Michael A. Boles, Thermodynamics: An Engineering Approach, 7th Edition, McGraw-Hill, 2010, ISBN 007-352932-X.

[12] Fraundorf, P. (2003). “Heat capacity in bits”. American Journal of Physics 71 (11): 1142. arXiv:cond-mat/9711074. Bibcode:2003AmJPh..71.1142F. doi:10.1119/1.1593658.

[13] D. Lynden-Bell & R. M. Lynden-Bell (Nov 1977). “On the negative specific heat paradox”. Monthly Notices of the Royal Astronomical Society 181: 405–419. Bibcode:1977MNRAS.181..405L. doi:10.1093/mnras/181.3.405.

[14] Lynden-Bell, D. (Dec 1998). “Negative Specific Heat in Astronomy, Physics and Chemistry”. Physica A 263: 293–304. arXiv:cond-mat/9812172v1. Bibcode:1999PhyA..263..293L. doi:10.1016/S0378-4371(98)00518-4.

[15] Schmidt, Martin; Kusche, Robert; Hippler, Thomas; Donges, Jörn; Kronmüller, Werner; von Issendorff, Bernd; Haberland, Hellmut (2001). “Negative Heat Capacity for a Cluster of 147 Sodium Atoms”. Physical Review Letters 86 (7): 1191–4. Bibcode:2001PhRvL..86.1191S. doi:10.1103/PhysRevLett.86.1191. PMID 11178041.

196

CHAPTER 8. CHAPTER 8. MATERIAL PROPERTIES

[16] See e.g., Wallace, David (2010). “Gravity, entropy, and cosmology: in search of clarity” (preprint). British Journal for the Philosophy of Science 61 (3): 513. arXiv:0907.0659. Bibcode:2010BJPS...61..513W. doi:10.1093/bjps/axp048. Section 4 and onwards. [17] Reif, F. (1965). Fundamentals of statistical and thermal physics. McGraw-Hill. pp. 253–254. [18] Charles Kittel; Herbert Kroemer (2000). Thermal physics. Freeman. p. 78. ISBN 0-7167-1088-9. [19] Media:Translational motion.gif [20] Smith, C. G. (2008). Quantum Physics and the Physics of large systems, Part 1A Physics. University of Cambridge. [21] The comparison must be made under constant-volume conditions—CvH—so that no work is performed. Nitrogen’s CvH (100 kPa, 20 °C) = 20.8 J mol−1 K−1 vs. the monatomic gases which equal 12.4717 J mol−1 K−1 . Citations: Freeman’s, W. H. “Physical Chemistry Part 3: Change Exercise 21.20b, Pg. 787” (PDF).

[35] “Heat Storage in Materials”. The Engineering Toolbox. [36] Crawford, R. J. Rotational molding of plastics. ISBN 159124-192-8. [37] Gaur, Umesh; Wunderlich, Bernhard (1981). “Heat capacity and other thermodynamic properties of linear macromolecules. II. Polyethylene” (PDF). Journal of Physical and Chemical Reference Data 10: 119. Bibcode:1981JPCRD..10..119G. doi:10.1063/1.555636. [38] Faber, P.; Garby, L. (1995). “Fat content affects heat capacity: a study in mice”. Acta Physiologica Scandinavica 153 (2): 185–7. doi:10.1111/j.1748-1716.1995.tb09850.x. PMID 7778459.

8.1.11

8.2

External links

Compressibility

“Incompressible” redirects here. For the property of vector fields, see Solenoidal vector field. For the topological [23] Petit A.-T., Dulong P.-L. (1819). “Recherches sur quelques property, see Incompressible surface. [22] Georgia State University. “Molar Specific Heats of Gases”.

points importants de la Théorie de la Chaleur”. Annales de Chimie et de Physique 10: 395–413. [24] “The Heat Capacity of a Solid” (PDF). [25] Hogan, C. (1969). “Density of States of an Insulating Ferromagnetic Alloy”. Physical Review 188 (2): 870. Bibcode:1969PhRv..188..870H. doi:10.1103/PhysRev.188.870.

In thermodynamics and fluid mechanics, compressibility is a measure of the relative volume change of a fluid or solid as a response to a pressure (or mean stress) change.

β=−

1 ∂V V ∂p

[26] Young; Geller (2008). Young and Geller College Physics (8th ed.). Pearson Education. ISBN 0-8053-9218-1.

where V is volume and p is pressure.

[27] In his textbook Kinetic Theory of Liquids (engl. 1947)

8.2.1

Definition

[28] Bolmatov, D.; Brazhkin, V. V.; Trachenko, K. (2012). “The phonon theory of liquid thermodynamics”. Scientific Reports 2. doi:10.1038/srep00421. Lay summary.

The specification above is incomplete, because for any object or system the magnitude of the compressibility depends strongly on whether the process is adiabatic or isothermal. [29] Ojovan, Michael I. (2008). “Viscosity and Glass Transition in Amorphous Oxides”. Advances in Condensed Accordingly isothermal compressibility is defined: Matter Physics 2008: 1. Bibcode:2008AdCMP2008....1O. doi:10.1155/2008/817829. [30] Page 183 in: Cornelius, Flemming (2008). Medical biophysics (6th ed.). ISBN 1-4020-7110-8. (also giving a density of 1.06 kg/L) [31] “Table of Specific Heats”. [32] “Iron”. National Institute of Standards and Technology.

βT = −

1 V

(

∂V ∂p

) T

where the subscript T indicates that the partial differential is to be taken at constant temperature Isentropic compressibility is defined: (

)

[33] “Materials Properties Handbook, Material: Lithium” (PDF). Archived from the original (PDF) on September 5, 2006.

βS = −

[34] “HCV (Molar Heat Capacity (cV)) Data for Methanol”. Dortmund Data Bank Software and Separation Technology.

where S is entropy. For a solid, the distinction between the two is usually negligible.

1 V

∂V ∂p

S

8.2. COMPRESSIBILITY

197
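These defining derivatives lend themselves to a quick numerical check when the volume is only known at sampled pressures. A minimal Python sketch, not part of the original text (the ideal-gas volume function and step size are assumptions chosen for illustration), estimates βT by a central difference and compares it with the exact ideal-gas result βT = 1/p:

```python
# Estimate isothermal compressibility beta_T = -(1/V)(dV/dp) at constant T
# by central difference, using an ideal gas V(p) = nRT/p as test data.

R = 8.314  # J/(mol K), molar gas constant

def volume(p, n=1.0, T=300.0):
    """Ideal-gas volume (m^3) at pressure p (Pa) and temperature T (K)."""
    return n * R * T / p

def beta_T(p, dp=1.0):
    """Central-difference estimate of -(1/V) dV/dp at constant temperature."""
    dV = volume(p + dp) - volume(p - dp)
    return -dV / (2.0 * dp) / volume(p)

p = 101325.0        # Pa, atmospheric pressure
est = beta_T(p)     # numerical estimate
exact = 1.0 / p     # ideal gas: beta_T = 1/p
rel_err = abs(est - exact) / exact
```

For an ideal gas at atmospheric pressure this gives βT ≈ 9.87×10−6 Pa−1, and the finite-difference estimate agrees with 1/p to many digits; with real p–V data the same stencil gives an empirical compressibility.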

The minus sign makes the compressibility positive in the (usual) case that an increase in pressure induces a reduction in volume.

Relation to speed of sound

The speed of sound is defined in classical mechanics as

c² = (∂p/∂ρ)S

where ρ is the density of the material. It follows, by replacing the partial derivatives, that the isentropic compressibility can be expressed as

βS = 1/(ρc²)

Relation to bulk modulus

The inverse of the compressibility is called the bulk modulus, often denoted K (sometimes B). That page also contains some examples for different materials.

The compressibility equation relates the isothermal compressibility (and indirectly the pressure) to the structure of the liquid.

8.2.2

Thermodynamics

Main article: Compressibility factor

The term “compressibility” is also used in thermodynamics to describe the deviation of the thermodynamic properties of a real gas from those expected of an ideal gas. The compressibility factor is defined as

Z = pV/(RT)

where p is the pressure of the gas, T is its temperature, and V is its molar volume. In the case of an ideal gas, the compressibility factor Z is equal to unity, and the familiar ideal gas law is recovered:

p = RT/V

Z can, in general, be either greater or less than unity for a real gas.

The deviation from ideal-gas behavior tends to become particularly significant (or, equivalently, the compressibility factor strays far from unity) near the critical point, or in the case of high pressure or low temperature. In these cases, a generalized compressibility chart or an alternative equation of state better suited to the problem must be used to produce accurate results.

A related situation occurs in hypersonic aerodynamics, where dissociation causes an increase in the “notional” molar volume, because a mole of oxygen, as O2, becomes 2 moles of monatomic oxygen, and N2 similarly dissociates to 2N. Since this occurs dynamically as air flows over the aerospace object, it is convenient to alter Z, defined for an initial 30 gram mole of air, rather than track the varying mean molecular weight, millisecond by millisecond. This pressure-dependent transition occurs for atmospheric oxygen in the 2,500 K to 4,000 K temperature range, and in the 5,000 K to 10,000 K range for nitrogen.[1]

In transition regions, where this pressure-dependent dissociation is incomplete, both beta (the volume/pressure differential ratio) and the differential, constant-pressure heat capacity greatly increase.

For moderate pressures, above 10,000 K the gas further dissociates into free electrons and ions. Z for the resulting plasma can similarly be computed for a mole of initial air, producing values between 2 and 4 for partially or singly ionized gas. Each dissociation absorbs a great deal of energy in a reversible process, and this greatly reduces the thermodynamic temperature of hypersonic gas decelerated near the aerospace object. Ions or free radicals transported to the object surface by diffusion may release this extra (non-thermal) energy if the surface catalyzes the slower recombination process.

The isothermal compressibility is related to the isentropic (or adiabatic) compressibility by the relation

βS = βT − α²T/(ρcp)

via Maxwell’s relations, where α is the volumetric coefficient of thermal expansion and cp is the specific heat capacity at constant pressure. More simply stated,

βT/βS = γ

where γ is the heat capacity ratio.
8.2.3

Earth science

Compressibility is used in the Earth sciences to quantify the ability of a soil or rock to reduce in volume under applied pressure. This concept is important for specific storage when estimating groundwater reserves in confined aquifers. Geologic materials are made up of two portions: solids and voids (i.e., porosity). The void space can be full of liquid or gas. Geologic materials reduce in volume only when the void spaces are reduced, which expels the liquid or gas from the voids. This can happen over a period of time, resulting in settlement.

It is an important concept in geotechnical engineering in the design of certain structural foundations. For example, the construction of high-rise structures over underlying layers of highly compressible bay mud poses a considerable design constraint and often leads to the use of driven piles or other innovative techniques.

8.2.4

Fluid dynamics

Main article: Navier–Stokes equations § Compressible flow of Newtonian fluids

The degree of compressibility of a fluid has strong implications for its dynamics. Most notably, the propagation of sound is dependent on the compressibility of the medium.

8.2.5

Aeronautical dynamics

Main article: Aerodynamics § Design issues with increasing speed

Compressibility is an important factor in aerodynamics. At low speeds, the compressibility of air is not significant in relation to aircraft design, but as the airflow nears and exceeds the speed of sound, a host of new aerodynamic effects become important in the design of aircraft. These effects, often several of them at a time, made it very difficult for World War II era aircraft to reach speeds much beyond 800 km/h (500 mph).

Many effects are often mentioned in conjunction with the term “compressibility”, but regularly have little to do with the compressible nature of air. From a strictly aerodynamic point of view, the term should refer only to those side-effects arising as a result of the changes in airflow from an incompressible fluid (similar in effect to water) to a compressible fluid (acting as a gas) as the speed of sound is approached. There are two effects in particular, wave drag and critical Mach.

8.2.6

Negative compressibility

In general, the bulk compressibility (the sum of the linear compressibilities on the three axes) is positive, i.e. an increase in pressure squeezes the material to a smaller volume. This condition is required for mechanical stability.[5] However, under very specific conditions the compressibility can be negative.[6]

8.2.7

See also

• Poisson ratio
• Mach number
• Prandtl–Glauert singularity, associated with supersonic flight
• Shear strength

8.2.8

References

[1] Regan, Frank J. Dynamics of Atmospheric Re-entry. p. 313. ISBN 1-56347-048-9.

[2] Domenico, P. A.; Mifflin, M. D. (1965). “Water from low permeability sediments and land subsidence”. Water Resources Research 1 (4): 563–576. Bibcode:1965WRR.....1..563D. doi:10.1029/WR001i004p00563. OSTI 5917760.

[3] Hugh D. Young; Roger A. Freedman (2012). University Physics with Modern Physics. Addison-Wesley. p. 356. ISBN 978-0-321-69686-1.

[4] Fine, Rana A.; Millero, F. J. (1973). “Compressibility of water as a function of temperature and pressure”. Journal of Chemical Physics 59 (10): 5529–5536. Bibcode:1973JChPh..59.5529F. doi:10.1063/1.1679903.

[5] Munn, R. W. (1971). “Role of the elastic constants in negative thermal expansion of axial solids”. Journal of Physics C: Solid State Physics 5: 535–542. Bibcode:1972JPhC....5..535M. doi:10.1088/0022-3719/5/5/005.

[6] Lakes, Rod; Wojciechowski, K. W. (2008). “Negative compressibility, negative Poisson’s ratio, and stability”. Physica Status Solidi (b) 245 (3): 545. Bibcode:2008PSSBR.245..545L. doi:10.1002/pssb.200777708. Gatt, Ruben; Grima, Joseph N. (2008). “Negative compressibility”. Physica Status Solidi (RRL) – Rapid Research Letters 2 (5): 236. Bibcode:2008PSSRR...2..236G. doi:10.1002/pssr.200802101. Kornblatt, J. A. (1998). “Materials with Negative Compressibilities”. Science 281 (5374): 143a. Bibcode:1998Sci...281..143K. doi:10.1126/science.281.5374.143a. Moore, B.; Jaglinski, T.; Stone, D. S.; Lakes, R. S. (2006). “Negative incremental bulk modulus in foams”. Philosophical Magazine Letters 86 (10): 651. Bibcode:2006PMagL..86..651M. doi:10.1080/09500830600957340.

8.3 Thermal expansion

Expansion joint in a road bridge used to avoid damage from thermal expansion.

Thermal expansion is the tendency of matter to change in shape, area, and volume in response to a change in temperature,[1] through heat transfer.

Temperature is a monotonic function of the average molecular kinetic energy of a substance. When a substance is heated, the kinetic energy of its molecules increases. Thus, the molecules begin moving more and usually maintain a greater average separation. Materials which contract with increasing temperature are unusual; this effect is limited in size, and only occurs within limited temperature ranges (see examples below). The degree of expansion divided by the change in temperature is called the material’s coefficient of thermal expansion and generally varies with temperature.

8.3.1

Overview

Predicting expansion

If an equation of state is available, it can be used to predict the values of the thermal expansion at all the required temperatures and pressures, along with many other state functions.

Contraction effects (negative thermal expansion)

A number of materials contract on heating within certain temperature ranges; this is usually called negative thermal expansion, rather than “thermal contraction”. For example, the coefficient of thermal expansion of water drops to zero as it is cooled to 3.983 °C and then becomes negative below this temperature; this means that water has a maximum density at this temperature, and this leads to bodies of water maintaining this temperature at their lower depths during extended periods of sub-zero weather. Also, fairly pure silicon has a negative coefficient of thermal expansion for temperatures between about 18 and 120 kelvin.[2]

Factors affecting thermal expansion

Unlike gases or liquids, solid materials tend to keep their shape when undergoing thermal expansion.

Thermal expansion generally decreases with increasing bond energy, which also has an effect on the melting point of solids, so high melting point materials are more likely to have lower thermal expansion. In general, liquids expand slightly more than solids. The thermal expansion of glasses is higher compared to that of crystals.[3] At the glass transition temperature, rearrangements that occur in an amorphous material lead to characteristic discontinuities of the coefficient of thermal expansion or the specific heat. These discontinuities allow detection of the glass transition temperature, where a supercooled liquid transforms to a glass.[4]

Absorption or desorption of water (or other solvents) can change the size of many common materials; many organic materials change size much more due to this effect than they do due to thermal expansion. Common plastics exposed to water can, in the long term, expand by many percent.

8.3.2

Coefficient of thermal expansion

The coefficient of thermal expansion describes how the size of an object changes with a change in temperature.
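The sign change in water’s expansion coefficient near 3.983 °C can be seen directly from density data. The sketch below is an illustration added here, not part of the original text; the density values are approximate handbook figures quoted from memory and should be treated as assumptions. It estimates α = −(1/ρ)(dρ/dT) on either side of the density maximum:

```python
# Approximate densities of liquid water (kg/m^3) near its density maximum
# (rough handbook values, used only for illustration).
rho = {0.0: 999.84, 4.0: 999.97, 8.0: 999.85}

def alpha(T1, T2):
    """Mean volumetric expansion coefficient -(1/rho) d(rho)/dT over [T1, T2]."""
    drho_dT = (rho[T2] - rho[T1]) / (T2 - T1)
    rho_mid = 0.5 * (rho[T1] + rho[T2])
    return -drho_dT / rho_mid

alpha_cold = alpha(0.0, 4.0)  # below the maximum: density rises on heating, so alpha < 0
alpha_warm = alpha(4.0, 8.0)  # above the maximum: density falls on heating, so alpha > 0
```

Between 0 °C and 4 °C the coefficient comes out negative (water contracts on cooling is reversed), and above 4 °C it turns positive, matching the behavior described in the text.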

Specifically, it measures the fractional change in size per degree change in temperature at a constant pressure. Several types of coefficients have been developed: volumetric, area, and linear. Which one is used depends on the particular application and which dimensions are considered important. For solids, one might only be concerned with the change along a length, or over some area.

The volumetric thermal expansion coefficient is the most basic thermal expansion coefficient, and the most relevant for fluids. In general, substances expand or contract when their temperature changes, with expansion or contraction occurring in all directions. Substances that expand at the same rate in every direction are called isotropic. For isotropic materials, the area and volumetric thermal expansion coefficients are, respectively, approximately twice and three times larger than the linear thermal expansion coefficient. Mathematical definitions of these coefficients are given below for solids, liquids, and gases.

General volumetric thermal expansion coefficient

In the general case of a gas, liquid, or solid, the volumetric coefficient of thermal expansion is given by

αV = (1/V)(∂V/∂T)p

The subscript p indicates that the pressure is held constant during the expansion, and the subscript V stresses that it is the volumetric (not linear) expansion that enters this general definition. In the case of a gas, the fact that the pressure is held constant is important, because the volume of a gas will vary appreciably with pressure as well as temperature. For a gas of low density this can be seen from the ideal gas law.

8.3.3

Expansion in solids

When calculating thermal expansion it is necessary to consider whether the body is free to expand or is constrained. If the body is free to expand, the expansion or strain resulting from an increase in temperature can be simply calculated by using the applicable coefficient of thermal expansion.

If the body is constrained so that it cannot expand, then internal stress will be caused (or changed) by a change in temperature. This stress can be calculated by considering the strain that would occur if the body were free to expand and the stress required to reduce that strain to zero, through the stress/strain relationship characterised by the elastic or Young’s modulus. In the special case of solid materials, external ambient pressure does not usually appreciably affect the size of an object, and so it is not usually necessary to consider the effect of pressure changes.

Common engineering solids usually have coefficients of thermal expansion that do not vary significantly over the range of temperatures where they are designed to be used, so where extremely high accuracy is not required, practical calculations can be based on a constant, average value of the coefficient of expansion.

Linear expansion

Change in length of a rod due to thermal expansion.

Linear expansion means change in one dimension (length) as opposed to change in volume (volumetric expansion). To a first approximation, the change in length measurements of an object due to thermal expansion is related to temperature change by a “linear expansion coefficient”. It is the fractional change in length per degree of temperature change. Assuming negligible effect of pressure, we may write

αL = (1/L)(dL/dT)

where L is a particular length measurement and dL/dT is the rate of change of that linear dimension per unit change in temperature.

The change in the linear dimension can be estimated to be

ΔL/L = αL ΔT

This equation works well as long as the linear-expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in length is small, ΔL/L ≪ 1. If either of these conditions does not hold, the equation must be integrated.

Effects on strain

For solid materials with a significant length, like rods or cables, an estimate of the amount of thermal expansion can be described by the material strain, given by ϵthermal and defined as

ϵthermal = (Lfinal − Linitial)/Linitial
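As a worked example of ΔL/L = αL ΔT (added for illustration; the coefficient is an assumed handbook-style value for steel, not taken from the original text), a 30 m steel rail warming by 40 K lengthens by about 14 mm:

```python
alpha_L = 12e-6   # 1/K, assumed linear expansion coefficient of steel
L = 30.0          # m, initial rail length
dT = 40.0         # K, temperature rise

dL = alpha_L * L * dT   # change in length, m: 12e-6 * 30 * 40 = 0.0144 m
strain = dL / L         # thermal strain epsilon = alpha_L * dT
```

The resulting strain, 4.8×10−4, is the same quantity ϵthermal defined above; if the rail were fully constrained, this is the strain the constraint would have to suppress.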

where Linitial is the length before the change of temperature and Lfinal is the length after the change of temperature.

For most solids, thermal expansion is proportional to the change in temperature:

ϵthermal ∝ ΔT

Thus, the change in either the strain or the temperature can be estimated by

ϵthermal = αL ΔT

where

ΔT = (Tfinal − Tinitial)

is the difference of the temperature between the two recorded strains, measured in degrees Celsius or kelvin, and αL is the linear coefficient of thermal expansion in “per degree Celsius” or “per kelvin”, denoted by °C−1 or K−1, respectively. In the field of continuum mechanics, thermal expansion and its effects are treated as eigenstrain and eigenstress.

Area expansion

The area thermal expansion coefficient relates the change in a material’s area dimensions to a change in temperature. It is the fractional change in area per degree of temperature change. Ignoring pressure, we may write

αA = (1/A)(dA/dT)

where A is some area of interest on the object, and dA/dT is the rate of change of that area per unit change in temperature.

The change in the area can be estimated as

ΔA/A = αA ΔT

This equation works well as long as the area expansion coefficient does not change much over the change in temperature ΔT, and the fractional change in area is small, ΔA/A ≪ 1. If either of these conditions does not hold, the equation must be integrated.

Volume expansion

For a solid, we can ignore the effects of pressure on the material, and the volumetric thermal expansion coefficient can be written:[5]

αV = (1/V)(dV/dT)

where V is the volume of the material, and dV/dT is the rate of change of that volume with temperature.

This means that the volume of a material changes by some fixed fractional amount. For example, a steel block with a volume of 1 cubic metre might expand to 1.002 cubic metres when the temperature is raised by 50 K. This is an expansion of 0.2%. If we had a block of steel with a volume of 2 cubic metres, then under the same conditions it would expand to 2.004 cubic metres, again an expansion of 0.2%. The volumetric expansion coefficient would be 0.2% for 50 K, or 0.004% K−1.

If we already know the expansion coefficient, then we can calculate the change in volume:

ΔV/V = αV ΔT

where ΔV/V is the fractional change in volume (e.g., 0.002) and ΔT is the change in temperature (50 °C).

The above example assumes that the expansion coefficient did not change as the temperature changed and that the increase in volume is small compared to the original volume. This is not always true, but for small changes in temperature it is a good approximation. If the volumetric expansion coefficient does change appreciably with temperature, or the increase in volume is significant, then the above equation must be integrated:

ln((V + ΔV)/V) = ∫ from Ti to Tf of αV(T) dT

ΔV/V = exp( ∫ from Ti to Tf of αV(T) dT ) − 1

where αV(T) is the volumetric expansion coefficient as a function of temperature T, and Ti and Tf are the initial and final temperatures respectively.

Isotropic materials

For isotropic materials the volumetric thermal expansion coefficient is three times the linear coefficient:
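The steel-block figures above can be checked numerically, and the integrated form compared with the linearized one. The sketch below is an added illustration assuming a constant αV, so the integral reduces to αV ΔT:

```python
from math import exp

alpha_V = 4e-5   # 1/K, volumetric coefficient from the 0.2%-per-50-K example
dT = 50.0        # K, temperature rise

linear = alpha_V * dT             # DeltaV/V from the linearized formula: 0.002
exact = exp(alpha_V * dT) - 1.0   # DeltaV/V from the integrated formula

# For small alpha_V*dT the two agree closely; the integrated value is
# slightly larger, by roughly (alpha_V*dT)^2 / 2.
difference = exact - linear
```

The linear estimate reproduces the 0.2% expansion quoted in the text, and the exponential correction is only about 2×10−6, which is why the linearized formula suffices for small temperature changes.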

αV = 3αL

This ratio arises because volume is composed of three mutually orthogonal directions. Thus, in an isotropic material, for small differential changes, one-third of the volumetric expansion is in a single axis. As an example, take a cube of steel that has sides of length L. The original volume will be V = L³ and the new volume, after a temperature increase, will be

V + ΔV = (L + ΔL)³ = L³ + 3L²ΔL + 3LΔL² + ΔL³ ≈ L³ + 3L²ΔL = V + 3V(ΔL/L)

We can make the substitutions ΔV = αV L³ ΔT and, for isotropic materials, ΔL = αL L ΔT. We now have

V + ΔV = (L + LαL ΔT)³ = L³ + 3L³αL ΔT + 3L³αL²ΔT² + L³αL³ΔT³ ≈ L³ + 3L³αL ΔT

Since the volumetric and linear coefficients are defined only for extremely small temperature and dimensional changes (that is, when ΔT and ΔL are small), the last two terms can be ignored and we get the above relationship between the two coefficients. If we are trying to go back and forth between volumetric and linear coefficients using larger values of ΔT, then we will need to take into account the third term, and sometimes even the fourth term.

Similarly, the area thermal expansion coefficient is two times the linear coefficient:

αA = 2αL

This ratio can be found in a way similar to that in the linear example above, noting that the area of a face on the cube is just L². Also, the same considerations must be made when dealing with large values of ΔT.

Anisotropic materials

Materials with anisotropic structures, such as crystals (with less than cubic symmetry) and many composites, will generally have different linear expansion coefficients αL in different directions. As a result, the total volumetric expansion is distributed unequally among the three axes. If the crystal symmetry is monoclinic or triclinic, even the angles between these axes are subject to thermal changes. In such cases it is necessary to treat the coefficient of thermal expansion as a tensor with up to six independent elements. A good way to determine the elements of the tensor is to study the expansion by powder diffraction.

8.3.4

Isobaric expansion in gases

For an ideal gas, the volumetric thermal expansion (i.e., the relative change in volume due to a temperature change) depends on the type of process in which the temperature is changed. Two simple cases are isobaric change, where the pressure is held constant, and adiabatic change, where no heat is exchanged with the environment.

The ideal gas law can be written as

pv = T

where p is the pressure, v is the specific volume, and T is the temperature measured in energy units. By taking the logarithm of this equation:

ln(v) + ln(p) = ln(T)

Then, by the definition of the isobaric thermal expansion coefficient, with the above equation of state:

γp ≡ (1/v)(∂v/∂T)p = (d ln(v)/dT)p = d ln(T)/dT = 1/T

The index p denotes an isobaric process.

8.3.5

Expansion in liquids

Theoretically, the coefficient of linear expansion can be found from the coefficient of volumetric expansion (αV ≈ 3αL). However, for liquids, αL is calculated through the experimental determination of αV.

8.3.6

Expansion in mixtures and alloys

The expansivities of the components of a mixture can cancel each other, as in invar.

The thermal expansivity of a mixture follows from the expansivities of the pure components and their excess expansivities:

∂V/∂T = Σi (∂Vi/∂T) + Σi (∂ViE/∂T)

α = Σi αi Vi + Σi αiE ViE

∂V̄iE/∂T = R ∂ln(γi)/∂P + RT ∂²ln(γi)/(∂T ∂P)

where γi is the activity coefficient of component i.
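The cube argument for αV = 3αL can be checked numerically: the exact fractional volume change of a cube whose sides each grow by the factor (1 + αL ΔT) differs from 3αL ΔT only by the higher-order terms dropped above. A small added sketch (the coefficient is an arbitrary assumed value):

```python
alpha_L = 2e-5   # 1/K, assumed linear expansion coefficient
dT = 10.0        # K, temperature rise

# Exact fractional volume change of a cube: (1 + alpha_L*dT)^3 - 1
exact = (1.0 + alpha_L * dT) ** 3 - 1.0
# Small-expansion approximation using alpha_V = 3*alpha_L
approx = 3.0 * alpha_L * dT

rel_err = abs(exact - approx) / exact  # contribution of the dropped terms
```

For αL ΔT = 2×10−4 the dropped quadratic and cubic terms contribute a relative error of order 10−4, confirming that the factor-of-three rule is excellent for small temperature changes and degrades only for large αL ΔT.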

8.3.7

Apparent and absolute expansion

When measuring the expansion of a liquid, the measurement must account for the expansion of the container as well. For example, when a flask with a long narrow stem, containing enough liquid to partially fill the stem itself, is placed in a heat bath, the column of liquid in the stem will initially drop, followed by an immediate rise of that column until the flask–liquid–heat bath system has thermalized. The initial drop in the column of liquid is not due to an initial contraction of the liquid but rather to the expansion of the flask, which contacts the heat bath first. Soon after, the liquid in the flask is heated by the flask itself and begins to expand. Since liquids typically expand more than solids, the expansion of the liquid in the flask eventually exceeds that of the flask, causing the column of liquid in the flask to rise. A direct measurement of the height of the liquid column is a measurement of the apparent expansion of the liquid. The absolute expansion of the liquid is the apparent expansion corrected for the expansion of the containing vessel.[6]

8.3.8

Examples and applications

For applications using the thermal expansion property, see bi-metal and mercury-in-glass thermometer.

The expansion and contraction of materials must be considered when designing large structures, when using tape or chain to measure distances for land surveys, when designing molds for casting hot material, and in other engineering applications when large changes in dimension due to temperature are expected.

Thermal expansion is also used in mechanical applications to fit parts over one another; e.g., a bushing can be fitted over a shaft by making its inner diameter slightly smaller than the diameter of the shaft, then heating it until it fits over the shaft, and allowing it to cool after it has been pushed over the shaft, thus achieving a “shrink fit”. Induction shrink fitting is a common industrial method to pre-heat metal components to between 150 °C and 300 °C, thereby causing them to expand and allowing for the insertion or removal of another component.

There exist some alloys with a very small linear expansion coefficient, used in applications that demand very small changes in physical dimension over a range of temperatures. One of these is Invar 36, with α approximately equal to 0.6×10−6 K−1. These alloys are useful in aerospace applications where wide temperature swings may occur.

Pullinger’s apparatus is used to determine the linear expansion of a metallic rod in the laboratory. The apparatus consists of a metal cylinder closed at both ends (called a steam jacket). It is provided with an inlet and outlet for the steam. The steam for heating the rod is supplied by a boiler which is connected by a rubber tube to the inlet. The center of the cylinder contains a hole to insert a thermometer. The rod under investigation is enclosed in the steam jacket. One of its ends is free, but the other end is pressed against a fixed screw. The position of the rod is determined by a micrometer screw gauge or spherometer.

Thermal expansion of long continuous sections of rail track is the driving force for rail buckling. This phenomenon resulted in 190 train derailments during 1998–2002 in the US alone.[7]

Drinking glass with fracture due to uneven thermal expansion after pouring of hot liquid into the otherwise cool glass

The control of thermal expansion in brittle materials is a key concern for a wide range of reasons. For example, both glass and ceramics are brittle, and uneven temperature causes uneven expansion, which in turn causes thermal stress that might lead to fracture. Ceramics need to be joined to, or work in concert with, a wide range of materials, and therefore their expansion must be matched to the application. Because glazes need to be firmly attached to the underlying porcelain (or other body type), their thermal expansion must be tuned to “fit” the body so that crazing or shivering does not occur. Good examples of products whose thermal expansion is the key to their success are CorningWare and the spark plug. The thermal expansion of ceramic bodies can be controlled by firing to create crystalline species that will influence the overall expansion of the material in the desired direction. In addition, or instead, the formulation of the body can employ materials delivering particles of the desired expansion to the matrix. The thermal expansion of glazes is controlled by their chemical composition and the firing schedule to which they were subjected. In most cases there are complex issues involved in controlling body and glaze expansion, so adjusting for thermal expansion must be done with an eye to other properties that will be affected; generally trade-offs are required.

Thermal expansion can have a noticeable effect on gasoline stored in above-ground storage tanks, which can cause gasoline pumps to dispense gasoline that may be more compressed than gasoline held in underground storage tanks in the winter, or less compressed than gasoline held in underground storage tanks in the summer.[8]

Heat-induced expansion has to be taken into account in most areas of engineering. A few examples are:

• Thermometers, most of which contain a liquid which is constrained to flow in only one direction (along the tube) due to changes in volume brought about by changes in temperature.

• A bi-metal mechanical thermometer uses a bimetallic strip and bends due to the differing thermal expansion of the two metals.

• Metal pipes made of different materials are heated by passing steam through them. While each pipe is being tested, one end is securely fixed and the other rests on a rotating shaft, the motion of which is indicated with a pointer. The linear expansion of the different metals is compared qualitatively and the coefficient of linear thermal expansion is calculated.

8.3.9

Thermal expansion coefficients for various materials

Main article: Thermal expansion coefficients of the elements (data page)

This section summarizes the coefficients for some common materials.

Isobaric volumetric expansion coefficient (Tait model)

• Metal framed windows need rubber spacers • Rubber tires

2 000 0 bar 125 bar 250 bar 375 bar 500 bar

1 800

1 600

• Metal hot water heating pipes should not be used in long straight lengths

1 400

1 200

• Large structures such as railways and bridges need expansion joints in the structures to avoid sun kink

1 000

800

• One of the reasons for the poor performance of cold car engines is that parts have inefficiently large spacings until the normal operating temperature is achieved.

600

400

200 20

40

60

80

100

120

140

160

180

200

220

240

260

• A gridiron pendulum uses an arrangement of different metals to maintain a more temperature stable pendu- Volumetric thermal expansion coefficient for a semicrystalline lum length. polypropylene. • A power line on a hot day is droopy, but on a cold day it is tight. This is because the metals expand under materials. heat. For isotropic materials the coefficients linear thermal ex• Expansion joints that absorb the thermal expansion in pansion α and volumetric thermal expansion αV are related by αV = 3α. For liquids usually the coefficient of volua piping system.[9] metric expansion is listed and linear expansion is calculated • Precision engineering nearly always requires the engi- here for comparison. neer to pay attention to the thermal expansion of the For common materials like many metals and compounds, product. For example, when using a scanning electron the thermal expansion coefficient is inversely proportional microscope even small changes in temperature such as to the melting point.[10] In particular for metals the relation 1 degree can cause a sample to change its position rel- is: ative to the focus point. Thermometers are another application of thermal expan0.020 sion – most contain a liquid (usually mercury or alcohol) α ≈ MP
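The two rules above can be sketched numerically. This is an illustrative snippet, not from the source; the melting point and handbook α value for aluminium are common reference figures assumed here:

```python
import math

def alpha_from_melting_point(mp_kelvin):
    """Rule-of-thumb linear expansion coefficient for metals: alpha ~ 0.020 / MP."""
    return 0.020 / mp_kelvin

def linear_expansion(alpha, length, delta_t):
    """Change in length dL = alpha * L0 * dT, valid for modest temperature changes."""
    return alpha * length * delta_t

# Aluminium melts at ~933 K; the rule of thumb gives alpha ~ 2.1e-5 per K,
# close to the handbook value of ~2.3e-5 per K.
alpha_est = alpha_from_melting_point(933.0)
assert math.isclose(alpha_est, 2.1e-5, rel_tol=0.05)

# A 1 m aluminium rod heated by 100 K grows by roughly 2 mm -- enough to matter
# for a shrink fit or an expansion joint.
dL = linear_expansion(alpha_est, 1.0, 100.0)
assert 1.5e-3 < dL < 2.5e-3
```

The same helper with Invar's α ≈ 0.6×10−6 K−1 gives an expansion some thirty-five times smaller, which is why such alloys are chosen for dimensionally critical parts.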

For halides and oxides,

α ≈ 0.038/MP − 7.0×10−6 K−1

Linear thermal expansion coefficient for some steel grades (X2CrNi12 (1.4003, 403), X20Cr13 (1.4021, 420), C35E (1.1181, 1035), X2CrNiMoN22-5-3 (1.4462, 2205), X2CrNiMo17-12-2 (1.4404, 316L))

In the table below, the range for α is from 10−7 K−1 for hard solids to 10−3 K−1 for organic liquids. The coefficient α varies with the temperature, and some materials have a very high variation; see for example the variation vs. temperature of the volumetric coefficient for a semicrystalline polypropylene (PP) at different pressures, and the variation of the linear coefficient vs. temperature for some steel grades (from bottom to top: ferritic stainless steel, martensitic stainless steel, carbon steel, duplex stainless steel, austenitic steel). (The formula αV ≈ 3α is usually used for solids.)[11]

8.3.10 See also

• Negative thermal expansion

• Mie–Grüneisen equation of state

• Autovent

• Grüneisen parameter

• Apparent molar property

8.3.11 References

[1] Paul A., Tipler; Gene Mosca (2008). Physics for Scientists and Engineers, Volume 1 (6th ed.). New York, NY: Worth Publishers. pp. 666–670. ISBN 1-4292-0132-0.

[2] Bullis, W. Murray (1990). "Chapter 6". In O'Mara, William C.; Herring, Robert B.; Hunt, Lee P. Handbook of semiconductor silicon technology. Park Ridge, New Jersey: Noyes Publications. p. 431. ISBN 0-8155-1237-6. Retrieved 2010-07-11.

[3] Varshneya, A. K. (2006). Fundamentals of inorganic glasses. Sheffield: Society of Glass Technology. ISBN 0-12-714970-8.

[4] Ojovan, M. I. (2008). "Configurons: thermodynamic parameters and symmetry changes at glass transition". Entropy 10 (3): 334–364. Bibcode:2008Entrp..10..334O. doi:10.3390/e10030334.

[5] Turcotte, Donald L.; Schubert, Gerald (2002). Geodynamics (2nd ed.). Cambridge. ISBN 0-521-66624-4.

[6] Ganot, A., Atkinson, E. (1883). Elementary treatise on physics experimental and applied for the use of colleges and schools, William and Wood & Co, New York, pp. 272–73.

[7] Track Buckling Research. Volpe Center, U.S. Department of Transportation.

[8] Cost or savings of thermal expansion in above ground tanks. Artofbeingcheap.com (2013-09-06). Retrieved 2014-01-19.

[9] Lateral, Angular and Combined Movements. U.S. Bellows.

[10] MIT Lecture: Shear and Thermal Expansion Tensors – Part 1.

[11] "Thermal Expansion". Western Washington University. Archived from the original on 2009-04-17.

[12] Ahmed, Ashraf; Tavakol, Behrouz; Das, Rony; Joven, Ronald; Roozbehjavan, Pooneh; Minaie, Bob (2012). Study of Thermal Expansion in Carbon Fiber Reinforced Polymer Composites. Proceedings of SAMPE International Symposium. Charleston, SC.

[13] Young; Geller. Young and Geller College Physics (8th ed.). ISBN 0-8053-9218-1.

[14] "Technical Glasses Data Sheet" (PDF). schott.com.

[15] Raymond Serway; John Jewett (2005). Principles of Physics: A Calculus-Based Text. Cengage Learning. p. 506. ISBN 0-534-49143-X.

[16] "DuPont™ Kapton® 200EN Polyimide Film". matweb.com.

[17] "Macor data sheet" (PDF). corning.com.

[18] "Properties of Common Liquid Materials".

[19] "WDSC 340. Class Notes on Thermal Properties of Wood". forestry.caf.wvu.edu. Archived from the original on 2009-03-30.

[20] Richard C. Weatherwax; Alfred J. Stamm (1956). The coefficients of thermal expansion of wood and wood products (PDF) (Technical report). Forest Products Laboratory, United States Forest Service. 1487.

[21] "Sapphire" (PDF). kyocera.com.

[22] "Basic Parameters of Silicon Carbide (SiC)". Ioffe Institute.

[23] Becker, P.; Seyfried, P.; Siegert, H. (1982). "The lattice parameter of highly pure silicon single crystals". Zeitschrift für Physik B 48: 17. Bibcode:1982ZPhyB..48...17B. doi:10.1007/BF02026423.

[24] Nave, Rod. "Thermal Expansion Coefficients at 20 C". Georgia State University.

[25] "Sitall CO-115M (Astrositall)". Star Instruments.

[26] Thermal Expansion table.

[27] Salvador, James R.; Guo, Fu; Hogan, Tim; Kanatzidis, Mercouri G. (2003). "Zero thermal expansion in YbGaGe due to an electronic valence transition". Nature 425 (6959): 702–5. Bibcode:2003Natur.425..702S. doi:10.1038/nature02011. PMID 14562099.

[28] Janssen, Y.; Chang, S.; Cho, B.K.; Llobet, A.; Dennis, K.W.; McCallum, R.W.; McQueeney, R.J.; Canfield, P.C. (2005). "YbGaGe: normal thermal expansion". Journal of Alloys and Compounds 389: 10–13. doi:10.1016/j.jallcom.2004.08.012.

8.3.12 External links

• Glass Thermal Expansion – Thermal expansion measurement, definitions, thermal expansion calculation from the glass composition

• Water thermal expansion calculator

• DoITPoMS Teaching and Learning Package on Thermal Expansion and the Bi-material Strip

• Engineering Toolbox – List of coefficients of Linear Expansion for some common materials

• Article on how αV is determined

• MatWeb: Free database of engineering properties for over 79,000 materials

• USA NIST Website – Temperature and Dimensional Measurement workshop

• Hyperphysics: Thermal expansion

• Understanding Thermal Expansion in Ceramic Glazes

Chapter 9. Potentials

9.1 Thermodynamic potential

A thermodynamic potential is a scalar quantity used to represent the thermodynamic state of a system. The concept of thermodynamic potentials was introduced by Pierre Duhem in 1886. Josiah Willard Gibbs in his papers used the term fundamental functions. One main thermodynamic potential that has a physical interpretation is the internal energy U. It is the energy of configuration of a given system of conservative forces (that is why it is a potential) and only has meaning with respect to a defined set of references (or data). Expressions for all other thermodynamic energy potentials are derivable via Legendre transforms from an expression for U.

In thermodynamics, certain forces, such as gravity, are typically disregarded when formulating expressions for potentials. For example, while all the working fluid in a steam engine may have higher energy due to gravity while sitting on top of Mount Everest than it would at the bottom of the Mariana Trench, the gravitational potential energy term in the formula for the internal energy would usually be ignored because changes in gravitational potential within the engine during operation would be negligible.
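The Legendre transforms mentioned above amount to simple algebraic definitions, which a short snippet can make concrete. This is an illustrative sketch, not from the source; the state values are arbitrary numbers chosen only to exercise the identities:

```python
def potentials(U, T, S, p, V):
    """The three common Legendre transforms of the internal energy U
    (particle numbers held fixed)."""
    F = U - T * S          # Helmholtz free energy: trade S for T
    H = U + p * V          # enthalpy: trade V for p
    G = U + p * V - T * S  # Gibbs free energy: trade both pairs
    return F, H, G

# Arbitrary state values (no physical units intended):
U, T, S, p, V = 100.0, 4.0, 5.0, 2.0, 3.0
F, H, G = potentials(U, T, S, p, V)

# The differences between the potentials reduce to pV and TS terms:
assert H - U == p * V == G - F
assert U - F == H - G == T * S
```

The two assertions are the finite-state analogues of the differential identities d(pV) = dH − dU = dG − dF and d(TS) = dU − dF = dH − dG derived later in this section.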

9.1.1 Description and interpretation

Five common thermodynamic potentials are:[1]

Internal energy: U
Helmholtz free energy: F = U − TS
Enthalpy: H = U + pV
Gibbs free energy: G = U + pV − TS
Landau potential (grand potential): Ω = F − ∑i µiNi

where T = temperature, S = entropy, p = pressure, V = volume. The Helmholtz free energy is often denoted by the symbol F, but the use of A is preferred by IUPAC,[2] ISO and IEC.[3] Ni is the number of particles of type i in the system and µi is the chemical potential for an i-type particle. For the sake of completeness, the set of all Ni are also included as natural variables, although they are sometimes ignored. These five common potentials are all energy potentials, but there are also entropy potentials. The thermodynamic square can be used as a tool to recall and derive some of the potentials.

Just as in mechanics, where potential energy is defined as the capacity to do work, different potentials have different meanings:

• Internal energy (U) is the capacity to do work plus the capacity to release heat.

• Gibbs energy (G) is the capacity to do non-mechanical work.

• Enthalpy (H) is the capacity to do non-mechanical work plus the capacity to release heat.

• Helmholtz free energy (F) is the capacity to do mechanical work (useful work).

From these definitions we can say that ΔU is the energy added to the system, ΔF is the total work done on it, ΔG is the non-mechanical work done on it, and ΔH is the sum of non-mechanical work done on the system and the heat given to it.

Thermodynamic potentials are very useful when calculating the equilibrium results of a chemical reaction, or when measuring the properties of materials in a chemical reaction. The chemical reactions usually take place under some simple constraints such as constant pressure and temperature, or constant entropy and volume, and when this is true, there is a corresponding thermodynamic potential that comes into play. Just as in mechanics, the system will tend towards lower values of potential and, at equilibrium, under these constraints, the potential will take on an unchanging minimum value. The thermodynamic potentials can also be used to estimate the total amount of energy available from a thermodynamic system under the appropriate constraint. In particular (see principle of minimum energy for a derivation):[4]

• When the entropy (S ) and “external parameters” (e.g. volume) of a closed system are held constant, the internal energy (U ) decreases and reaches a minimum value at equilibrium. This follows from the first and


second laws of thermodynamics and is called the principle of minimum energy. The following three statements are directly derivable from this principle.

• When the temperature (T) and external parameters of a closed system are held constant, the Helmholtz free energy (F) decreases and reaches a minimum value at equilibrium.

• When the pressure (p) and external parameters of a closed system are held constant, the enthalpy (H) decreases and reaches a minimum value at equilibrium.

• When the temperature (T), pressure (p) and external parameters of a closed system are held constant, the Gibbs free energy (G) decreases and reaches a minimum value at equilibrium.

9.1.2 Natural variables

The variables that are held constant in this process are termed the natural variables of that potential.[5] The natural variables are important not only for the above-mentioned reason, but also because if a thermodynamic potential can be determined as a function of its natural variables, all of the thermodynamic properties of the system can be found by taking partial derivatives of that potential with respect to its natural variables, and this is true for no other combination of variables. Conversely, if a thermodynamic potential is not given as a function of its natural variables, it will not, in general, yield all of the thermodynamic properties of the system.

Notice that the sets of natural variables for the above four potentials are formed from every combination of the T–S and p–V variables, excluding any pairs of conjugate variables. There is no reason to ignore the Ni–µi conjugate pairs, and in fact we may define four additional potentials for each species.[6] Using IUPAC notation, in which the brackets contain the natural variables (other than the main four), we have:

U[µj] = U − µjNj
F[µj] = U − TS − µjNj
H[µj] = U + pV − µjNj
G[µj] = U + pV − TS − µjNj

If there is only one species, then we are done. But, if there are, say, two species, then there will be additional potentials such as U[µ1, µ2] = U − µ1N1 − µ2N2 and so on. If there are D dimensions to the thermodynamic space, then there are 2^D unique thermodynamic potentials. For the most simple case, a single-phase ideal gas, there will be three dimensions, yielding eight thermodynamic potentials.

9.1.3 The fundamental equations

Main article: Fundamental thermodynamic relation

The definitions of the thermodynamic potentials may be differentiated and, along with the first and second laws of thermodynamics, a set of differential equations known as the fundamental equations follow.[7] (Actually they are all expressions of the same fundamental thermodynamic relation, but are expressed in different variables.) By the first law of thermodynamics, any differential change in the internal energy U of a system can be written as the sum of heat flowing into the system and work done by the system on the environment, along with any change due to the addition of new particles to the system:

dU = δQ − δW + ∑i µi dNi

where δQ is the infinitesimal heat flow into the system, δW is the infinitesimal work done by the system, µi is the chemical potential of particle type i and Ni is the number of type-i particles. (Note that neither δQ nor δW is an exact differential. Small changes in these variables are, therefore, represented with δ rather than d.)

By the second law of thermodynamics, we can express the internal energy change in terms of state functions and their differentials. In case of reversible changes we have:

δQ = T dS

δW = p dV

where T is temperature, S is entropy, p is pressure, and V is volume; the equality holds for reversible processes.

This leads to the standard differential form of the internal energy in case of a quasistatic reversible change:

dU = T dS − p dV + ∑i µi dNi

Since U, S and V are thermodynamic functions of state, the above relation holds also for arbitrary non-reversible changes. If the system has more external variables than just the volume that can change, the fundamental thermodynamic relation generalizes to:

dU = T dS − ∑i Xi dxi + ∑j µj dNj
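The structure of the fundamental relation can be checked numerically. The sketch below is an illustration under stated assumptions, not from the text: it uses the fundamental relation of a monatomic ideal gas, U ∝ V^(−2/3) exp(2S/(3Nk)), for which the derivatives of U(S, V) reduce to T = 2U/(3Nk) and p = 2U/(3V):

```python
import math

k = 1.380649e-23  # Boltzmann constant, J/K
N = 1e22          # particle number (arbitrary)

def U(S, V):
    """Fundamental relation U(S, V) of a monatomic ideal gas, constants folded
    into the prefactor (so the absolute scale is arbitrary)."""
    return 1.0 * V ** (-2.0 / 3.0) * math.exp(2.0 * S / (3.0 * N * k))

def ddx(f, x, h):
    """Central finite difference, used as a stand-in for the partial derivative."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

S0, V0 = 0.05, 1.0e-3  # an arbitrary state (J/K, m^3)

# Temperature and pressure appear as partial derivatives of the potential:
T = ddx(lambda S: U(S, V0), S0, 1e-7)    # T = (dU/dS)_V
p = -ddx(lambda V: U(S0, V), V0, 1e-9)   # p = -(dU/dV)_S

assert math.isclose(T, 2 * U(S0, V0) / (3 * N * k), rel_tol=1e-5)
assert math.isclose(p, 2 * U(S0, V0) / (3 * V0), rel_tol=1e-5)
```

This is exactly the content of the next subsection: once a potential is known as a function of its natural variables, the remaining state variables fall out as derivatives.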

Here the Xi are the generalized forces corresponding to the external variables xi. Applying Legendre transforms repeatedly, the following differential relations hold for the four potentials:

dU = T dS − p dV + ∑i µi dNi
dF = −S dT − p dV + ∑i µi dNi
dH = T dS + V dp + ∑i µi dNi
dG = −S dT + V dp + ∑i µi dNi

Note that the infinitesimals on the right-hand side of each of the above equations are of the natural variables of the potential on the left-hand side. Similar equations can be developed for all of the other thermodynamic potentials of the system. There will be one fundamental equation for each thermodynamic potential, resulting in a total of 2^D fundamental equations. The differences between the four thermodynamic potentials can be summarized as follows:

d(pV) = dH − dU = dG − dF

d(TS) = dU − dF = dH − dG

9.1.4 The equations of state

We can use the above equations to derive differential definitions of some thermodynamic parameters. If we define Φ to stand for any of the thermodynamic potentials, then the above equations are of the form:

dΦ = ∑i xi dyi

where xi and yi are conjugate pairs, and the yi are the natural variables of the potential Φ. From the chain rule it follows that:

xj = (∂Φ/∂yj){yi≠j}

where {yi≠j} is the set of all natural variables of Φ except yj. This yields expressions for various thermodynamic parameters in terms of the derivatives of the potentials with respect to their natural variables. These equations are known as equations of state, since they specify parameters of the thermodynamic state.[8] If we restrict ourselves to the potentials U, F, H and G, then we have:

+T = (∂U/∂S)V,{Ni} = (∂H/∂S)p,{Ni}
−p = (∂U/∂V)S,{Ni} = (∂F/∂V)T,{Ni}
+V = (∂H/∂p)S,{Ni} = (∂G/∂p)T,{Ni}
−S = (∂G/∂T)p,{Ni} = (∂F/∂T)V,{Ni}
µj = (∂ϕ/∂Nj)X,Y,{Ni≠j}

where, in the last equation, ϕ is any of the thermodynamic potentials U, F, H, G, and X, Y, {Ni≠j} are the set of natural variables for that potential, excluding Nj. If we use all potentials, then we will have more equations of state such as

−Nj = (∂U[µj]/∂µj)S,V,{Ni≠j}

and so on. In all, there will be D equations for each potential, resulting in a total of D·2^D equations of state. If the D equations of state for a particular potential are known, then the fundamental equation for that potential can be determined. This means that all thermodynamic information about the system will be known, and that the fundamental equations for any other potential can be found, along with the corresponding equations of state.

9.1.5 The Maxwell relations

Main article: Maxwell relations

Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of some potential Φ. We may take the "cross differentials" of the state equations, which obey the following relationship:

(∂/∂yj (∂Φ/∂yk){yi≠k}){yi≠j} = (∂/∂yk (∂Φ/∂yj){yi≠j}){yi≠k}

From these we get the Maxwell relations.[1][9] There will be (D − 1)/2 of them for each potential, giving a total of D(D − 1)/2 equations in all. If we restrict ourselves to the potentials U, F, H and G, then we have:

(∂T/∂V)S,{Ni} = −(∂p/∂S)V,{Ni}
(∂T/∂p)S,{Ni} = +(∂V/∂S)p,{Ni}
(∂S/∂V)T,{Ni} = +(∂p/∂T)V,{Ni}
(∂S/∂p)T,{Ni} = −(∂V/∂T)p,{Ni}

Using the equations of state involving the chemical potential we get equations such as:

(∂T/∂Nj)V,S,{Ni≠j} = (∂µj/∂S)V,{Ni}

and using the other potentials we can get equations such as:

(∂Nj/∂V)S,µj,{Ni≠j} = −(∂p/∂µj)S,V,{Ni≠j}
(∂Nj/∂Nk)S,V,µj,{Ni≠j,k} = −(∂µk/∂µj)S,V,{Ni≠j}

9.1.6 Euler integrals

Again, define xi and yi to be conjugate pairs, and the yi to be the natural variables of the internal energy. Since all of the natural variables of the internal energy U are extensive quantities,

U({αyi}) = αU({yi})

it follows from Euler's homogeneous function theorem that the internal energy can be written as:

U({yi}) = ∑j yj (∂U/∂yj){yi≠j}

From the equations of state, we then have:

U = TS − pV + ∑i µiNi

Substituting into the expressions for the other main potentials, we have:

F = −pV + ∑i µiNi

H = TS + ∑i µiNi

G = ∑i µiNi

As in the above sections, this process can be carried out on all of the other thermodynamic potentials. Note that the Euler integrals are sometimes also referred to as fundamental equations.

9.1.7 The Gibbs–Duhem relation

Deriving the Gibbs–Duhem equation from basic thermodynamic state equations is straightforward.[7][10][11] Equating any thermodynamic potential definition with its Euler integral expression yields:

U = TS − pV + ∑i µiNi

Differentiating, and using the second law:

dU = T dS − p dV + ∑i µi dNi

yields:

0 = S dT − V dp + ∑i Ni dµi

which is the Gibbs–Duhem relation. The Gibbs–Duhem relation is a relationship among the intensive parameters of the system. It follows that for a simple system with I components, there will be I + 1 independent parameters, or degrees of freedom. For example, a simple system with a single component will have two degrees of freedom, and may be specified by only two parameters, such as pressure and volume. The law is named after Josiah Willard Gibbs and Pierre Duhem.

9.1.8 Chemical reactions

Changes in these quantities are useful for assessing the degree to which a chemical reaction will proceed. The relevant quantity depends on the reaction conditions: at constant S and V it is ΔU, at constant T and V it is ΔF, at constant S and p it is ΔH, and at constant T and p it is ΔG. Δ denotes the change in the potential, and at equilibrium the change will be zero.

Most commonly one considers reactions at constant p and T, so the Gibbs free energy is the most useful potential in studies of chemical reactions.

9.1.9 See also

• Coomber's relationship
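Before leaving section 9.1, the Maxwell relations above lend themselves to a quick numerical sanity check. The sketch below is illustrative only, not from the source: it uses the Helmholtz free energy of a monatomic ideal gas (additive constants dropped, Nk an arbitrary value) and verifies (∂S/∂V)T = (∂p/∂T)V by finite differences:

```python
import math

Nk = 0.138  # N * k_B in J/K (an arbitrary amount of gas)

def F(T, V):
    """Helmholtz free energy of a monatomic ideal gas, additive constants dropped."""
    return -Nk * T * (math.log(V) + 1.5 * math.log(T))

def ddx(f, x, h):
    """Central finite difference."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Equations of state as first derivatives of F:
S = lambda T, V: -ddx(lambda t: F(t, V), T, 1e-4)   # S = -(dF/dT)_V
p = lambda T, V: -ddx(lambda v: F(T, v), V, 1e-9)   # p = -(dF/dV)_T

T0, V0 = 300.0, 1.0e-3

# Maxwell relation (dS/dV)_T = (dp/dT)_V; for this gas both sides equal Nk/V:
lhs = ddx(lambda v: S(T0, v), V0, 1e-6)
rhs = ddx(lambda t: p(t, V0), T0, 1e-2)
assert math.isclose(lhs, rhs, rel_tol=1e-3)
assert math.isclose(lhs, Nk / V0, rel_tol=1e-3)
```

The equality of the two nested derivatives is nothing more than the symmetry of second derivatives of F, which is how the Maxwell relations arise in the first place.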

9.1.10 Notes

[1] Alberty (2001) p. 1353

[2] Alberty (2001) p. 1376

[3] ISO/IEC 80000-5:2007, item 5-20.4

[4] Callen (1985) p. 153

[5] Alberty (2001) p. 1352

[6] Alberty (2001) p. 1355

[7] Alberty (2001) p. 1354

[8] Callen (1985) p. 37

[9] Callen (1985) p. 181

[10] Moran & Shapiro, p. 538

[11] Callen (1985) p. 60

9.1.11 References

• Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (PDF). Pure Appl. Chem. 73 (8): 1349–1380. doi:10.1351/pac200173081349.

• Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). New York: John Wiley & Sons. ISBN 0-471-86256-8.

• Moran, Michael J.; Shapiro, Howard N. (1996). Fundamentals of Engineering Thermodynamics (3rd ed.). New York; Toronto: J. Wiley & Sons. ISBN 0-471-07681-3.

9.1.12 Further reading

• McGraw Hill Encyclopaedia of Physics (2nd Edition), C.B. Parker, 1994, ISBN 0-07-051400-3

• Thermodynamics, From Concepts to Applications (2nd Edition), A. Shavit, C. Gutfinger, CRC Press (Taylor and Francis Group, USA), 2009, ISBN 9781420073683

• Chemical Thermodynamics, D.J.G. Ives, University Chemistry, Macdonald Technical and Scientific, 1971, ISBN 0-356-03736-3

• Elements of Statistical Thermodynamics (2nd Edition), L.K. Nash, Principles of Chemistry, Addison-Wesley, 1974, ISBN 0-201-05229-6

• Statistical Physics (2nd Edition), F. Mandl, Manchester Physics, John Wiley & Sons, 2008, ISBN 9780471566588

9.1.13 External links

• Thermodynamic Potentials – Georgia State University

• Chemical Potential Energy: The 'Characteristic' vs the Concentration-Dependent Kind

9.2 Enthalpy

Not to be confused with Entropy.

Enthalpy /ˈɛnθəlpi/ is a measurement of energy in a thermodynamic system. It includes the internal energy, which is the energy required to create a system, and the amount of energy required to make room for it by displacing its environment and establishing its volume and pressure.[1]

Enthalpy is defined as a state function that depends only on the prevailing equilibrium state identified by the variables internal energy, pressure, and volume. It is an extensive quantity. The unit of measurement for enthalpy in the International System of Units (SI) is the joule, but other historical, conventional units are still in use, such as the British thermal unit and the calorie.

The enthalpy is the preferred expression of system energy changes in many chemical, biological, and physical measurements at constant pressure, because it simplifies the description of energy transfer. At constant pressure, the enthalpy change equals the energy transferred from the environment through heating or work other than expansion work.

The total enthalpy, H, of a system cannot be measured directly. The same situation exists in classical mechanics: only a change or difference in energy carries physical meaning. Enthalpy itself is a thermodynamic potential, so in order to measure the enthalpy of a system, we must refer to a defined reference point; therefore what we measure is the change in enthalpy, ΔH. The ΔH is a positive change in endothermic reactions, and negative in heat-releasing exothermic processes.

For processes under constant pressure, ΔH is equal to the change in the internal energy of the system, plus the pressure-volume work that the system has done on its surroundings.[2] This means that the change in enthalpy under such conditions is the heat absorbed (or released) by the material through a chemical reaction or by external heat transfer. Enthalpies for chemical substances at constant pressure assume standard state: most commonly 1 bar pressure. Standard state does not, strictly speaking, specify a temperature (see standard state), but expressions for enthalpy generally reference the standard heat of formation at 25 °C.

Enthalpy of ideal gases and incompressible solids and liquids does not depend on pressure, unlike entropy and Gibbs energy. Real materials at common temperatures and pressures usually closely approximate this behavior, which greatly simplifies enthalpy calculation and use in practical designs and analyses.

9.2.1 Origins

The word enthalpy is based on the Ancient Greek verb enthalpein (ἐνθάλπειν), which means "to warm in".[3] It comes from the Classical Greek prefix ἐν- en-, meaning "to put into", and the verb θάλπειν thalpein, meaning "to heat". The word enthalpy is often incorrectly attributed to Benoît Paul Émile Clapeyron and Rudolf Clausius through the 1850 publication of their Clausius–Clapeyron relation. This misconception was popularized by the 1927 publication of The Mollier Steam Tables and Diagrams. However, neither the concept, the word, nor the symbol for enthalpy existed until well after Clapeyron's death.

The earliest writings to contain the concept of enthalpy did not appear until 1875,[4] when Josiah Willard Gibbs introduced "a heat function for constant pressure". However, Gibbs did not use the word "enthalpy" in his writings.[note 1] The actual word first appears in the scientific literature in a 1909 publication by J. P. Dalton. According to that publication, Heike Kamerlingh Onnes (1853–1926) actually coined the word.[5]

Over the years, many different symbols were used to denote enthalpy. It was not until 1922 that Alfred W. Porter proposed the symbol "H" as the accepted standard,[6] thus finalizing the terminology still in use today.

9.2.2 Formal definition

The enthalpy of a homogeneous system is defined as[7][8]

H = U + pV,

where

H is the enthalpy of the system,
U is the internal energy of the system,
p is the pressure of the system,
V is the volume of the system.

The enthalpy is an extensive property. This means that, for homogeneous systems, the enthalpy is proportional to the size of the system. It is convenient to introduce the specific enthalpy h = H/m, where m is the mass of the system, or the molar enthalpy Hm = H/n, where n is the number of moles (h and Hm are intensive properties). For inhomogeneous systems the enthalpy is the sum of the enthalpies of the composing subsystems:

H = ∑k Hk,

where the label k refers to the various subsystems. In case of continuously varying p, T, and/or composition, the summation becomes an integral:

H = ∫ ρh dV,

where ρ is the density.

The enthalpy of homogeneous systems can be viewed as a function H(S, p) of the entropy S and the pressure p, and a differential relation for it can be derived as follows. We start from the first law of thermodynamics for closed systems for an infinitesimal process:

dU = δQ − δW.

Here, δQ is a small amount of heat added to the system, and δW a small amount of work performed by the system. In a homogeneous system only reversible processes can take place, so the second law of thermodynamics gives δQ = T dS, with T the absolute temperature of the system. Furthermore, if only pV work is done, δW = p dV. As a result,

dU = T dS − p dV.

Adding d(pV) to both sides of this expression gives

dU + d(pV) = T dS − p dV + d(pV),

or

d(U + pV) = T dS + V dp.

So

dH(S, p) = T dS + V dp.
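The relation dH(S, p) = T dS + V dp can be made concrete with an ideal gas heated at constant pressure, where dp = 0 and the enthalpy change reduces to Cp ΔT. This is a numeric sketch with arbitrary values, not from the text:

```python
import math

R = 8.314  # molar gas constant, J/(mol K)
n = 1.0    # moles of a monatomic ideal gas

Cv = 1.5 * n * R   # heat capacity at constant volume
Cp = Cv + n * R    # for an ideal gas, Cp = Cv + nR

p = 1.0e5              # Pa, held constant during heating
T1, T2 = 300.0, 400.0  # K

dU = Cv * (T2 - T1)         # change in internal energy
dV = n * R * (T2 - T1) / p  # volume change from pV = nRT at fixed p
dH = dU + p * dV            # H = U + pV, with p constant

# At constant pressure the enthalpy change equals the heat supplied, Cp * dT:
assert math.isclose(dH, Cp * (T2 - T1))
```

The pΔV term is the work the gas does pushing back its surroundings, which is exactly why Cp exceeds Cv: part of the supplied heat leaves again as expansion work.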

9.2. ENTHALPY

9.2.3

213

Other expressions

The above expression of dH in terms of entropy and pressure may be unfamiliar to some readers. However, there are expressions in terms of more familiar variables such as temperature and pressure:[9][7]:88

dH = Cp dT + V(1 − αT) dp.

Here Cp is the heat capacity at constant pressure and α is the coefficient of (cubic) thermal expansion:

α = (1/V) (∂V/∂T)p.

With this expression one can, in principle, determine the enthalpy if Cp and V are known as functions of p and T. Note that for an ideal gas, αT = 1,[note 2] so that

dH = Cp dT.

In a more general form, the first law describes the internal energy with additional terms involving the chemical potential and the number of particles of various types. The differential statement for dH then becomes

dH = T dS + V dp + Σi μi dNi,

where μi is the chemical potential per particle for an i-type particle, and Ni is the number of such particles. The last term can also be written as μi dni (with dni the number of moles of component i added to the system and, in this case, μi the molar chemical potential) or as μi dmi (with dmi the mass of component i added to the system and, in this case, μi the specific chemical potential).

9.2.4 Physical interpretation

The U term can be interpreted as the energy required to create the system, and the pV term as the energy that would be required to “make room” for the system if the pressure of the environment remained constant. When a system, for example, n moles of a gas of volume V at pressure p and temperature T, is created or brought to its present state from absolute zero, energy must be supplied equal to its internal energy U plus pV, where pV is the work done in pushing against the ambient (atmospheric) pressure.

In basic physics and statistical mechanics it may be more interesting to study the internal properties of the system, and therefore the internal energy is used.[10][11] In basic chemistry, experiments are often conducted at constant atmospheric pressure, and the pressure-volume work represents an energy exchange with the atmosphere that cannot be accessed or controlled, so that ΔH is the expression chosen for the heat of reaction.

9.2.5 Relationship to heat

In order to discuss the relation between the enthalpy increase and the heat supplied, we return to the first law for closed systems: dU = δQ − δW. We apply it to the special case with a uniform pressure at the surface. In this case the work term can be split into two contributions: the so-called pV work, given by p dV (where p is the pressure at the surface and dV is the increase of the volume of the system), and all other types of work δW′, such as work done by a shaft or by electromagnetic interaction. So we write δW = p dV + δW′, and the first law reads

dU = δQ − p dV − δW′,

or

dH = δQ + V dp − δW′.

From this relation we see that the increase in enthalpy of a system is equal to the added heat:

dH = δQ,

provided that the system is under constant pressure (dp = 0) and that the only work done by the system is expansion work (δW′ = 0).[12]

9.2.6 Applications

In thermodynamics, one can calculate enthalpy by determining the requirements for creating a system from “nothingness”; the mechanical work required, pV, differs based upon the conditions that obtain during the creation of the thermodynamic system.

Energy must be supplied to remove particles from the surroundings to make space for the creation of the system, assuming that the pressure p remains constant; this is the pV term. The supplied energy must also provide the change in internal energy, U, which includes activation energies, ionization energies, mixing energies, vaporization energies, chemical bond energies, and so forth. Together, these constitute the change in the enthalpy U + pV. For systems at

214

CHAPTER 9. CHAPTER 9. POTENTIALS

constant pressure, with no external work done other than the pV work, the change in enthalpy is the heat received by the system.

For a simple system, with a constant number of particles, the difference in enthalpy is the maximum amount of thermal energy derivable from a thermodynamic process in which the pressure is held constant.

Heat of reaction

Main article: Standard enthalpy of reaction

The total enthalpy of a system cannot be measured directly; the enthalpy change of a system is measured instead. Enthalpy change is defined by the following equation:

ΔH = Hf − Hi,

where ΔH is the “enthalpy change”, Hf is the final enthalpy of the system (in a chemical reaction, the enthalpy of the products), and Hi is the initial enthalpy of the system (in a chemical reaction, the enthalpy of the reactants).

For an exothermic reaction at constant pressure, the system’s change in enthalpy equals the energy released in the reaction, including the energy retained in the system and lost through expansion against its surroundings. In a similar manner, for an endothermic reaction, the system’s change in enthalpy is equal to the energy absorbed in the reaction, including the energy lost by the system and gained from compression by its surroundings. A relatively easy way to determine whether a reaction is exothermic or endothermic is to determine the sign of ΔH. If ΔH is positive, the reaction is endothermic: heat is absorbed by the system because the products of the reaction have a greater enthalpy than the reactants. On the other hand, if ΔH is negative, the reaction is exothermic: the overall decrease in enthalpy is achieved by the generation of heat.

Specific enthalpy

The specific enthalpy of a uniform system is defined as h = H/m, where m is the mass of the system. The SI unit for specific enthalpy is the joule per kilogram. It can be expressed in other specific quantities as h = u + pv, where u is the specific internal energy, p is the pressure, and v is the specific volume, which is equal to 1/ρ, where ρ is the density.

Enthalpy changes

An enthalpy change describes the change in enthalpy observed in the constituents of a thermodynamic system when undergoing a transformation or chemical reaction. It is the difference between the enthalpy after the process has completed, i.e. the enthalpy of the products, and the initial enthalpy of the system, i.e. the reactants. These processes are reversible, and the enthalpy for the reverse process is the negative value of the forward change. A common standard enthalpy change is the enthalpy of formation, which has been determined for a large number of substances. Enthalpy changes are routinely measured and compiled in chemical and physical reference works, such as the CRC Handbook of Chemistry and Physics. The following is a selection of enthalpy changes commonly recognized in thermodynamics.

When used in these recognized terms the qualifier change is usually dropped and the property is simply termed enthalpy of process. Since these properties are often used as reference values, it is very common to quote them for a standardized set of environmental parameters, or standard conditions, including:

• A temperature of 25 °C or 298 K,
• A pressure of one atmosphere (1 atm or 101.325 kPa),
• A concentration of 1.0 M when the element or compound is present in solution,
• Elements or compounds in their normal physical states, i.e. standard state.

For such standardized values the name of the enthalpy is commonly prefixed with the term standard, e.g. standard enthalpy of formation.

Chemical properties:

• Enthalpy of reaction, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of substance reacts completely.
• Enthalpy of formation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a compound is formed from its elementary antecedents.
• Enthalpy of combustion, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a substance burns completely with oxygen.
• Enthalpy of hydrogenation, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of an unsaturated compound reacts completely with an excess of hydrogen to form a saturated compound.
• Enthalpy of atomization, defined as the enthalpy change required to atomize one mole of compound completely.
• Enthalpy of neutralization, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of water is formed when an acid and a base react.
• Standard enthalpy of solution, defined as the enthalpy change observed in a constituent of a thermodynamic system when one mole of a solute is dissolved completely in an excess of solvent, so that the solution is at infinite dilution.
• Standard enthalpy of denaturation (biochemistry), defined as the enthalpy change required to denature one mole of compound.
• Enthalpy of hydration, defined as the enthalpy change observed when one mole of gaseous ions is completely dissolved in water, forming one mole of aqueous ions.

Physical properties:

• Enthalpy of fusion, defined as the enthalpy change required to completely change the state of one mole of substance between solid and liquid states.
• Enthalpy of vaporization, defined as the enthalpy change required to completely change the state of one mole of substance between liquid and gaseous states.
• Enthalpy of sublimation, defined as the enthalpy change required to completely change the state of one mole of substance between solid and gaseous states.
• Lattice enthalpy, defined as the energy required to separate one mole of an ionic compound into separated gaseous ions an infinite distance apart (meaning no force of attraction).
• Enthalpy of mixing, defined as the enthalpy change upon mixing of two (non-reacting) chemical substances.

Open systems

In thermodynamic open systems, matter may flow in and out of the system boundaries. The first law of thermodynamics for open systems states: the increase in the internal energy of a system is equal to the amount of energy added to the system by matter flowing in and by heating, minus the amount lost by matter flowing out and in the form of work done by the system:

dU = δQ + dUin − dUout − δW,

where Uin is the average internal energy entering the system, and Uout is the average internal energy leaving the system.

[Figure: During steady, continuous operation, an energy balance applied to an open system equates shaft work performed by the system to heat added plus net enthalpy added. The diagram shows the heat added Q̇, the enthalpy flows Hin and Hout crossing the (open) system boundary, and the work Wshaft performed external to the boundary.]

The region of space enclosed by the boundaries of the open system is usually called a control volume, and it may or may not correspond to physical walls. If we choose the shape of the control volume such that all flow in or out occurs perpendicular to its surface, then the flow of matter into the system performs work as if it were a piston of fluid pushing mass into the system, and the system performs work on the flow of matter out as if it were driving a piston of fluid. There are then two types of work performed: flow work described above, which is performed on the fluid (this is also often called pV work), and shaft work, which may be performed on some mechanical device.

These two types of work are expressed in the equation

δW = d(pout Vout) − d(pin Vin) + δWshaft.

Substitution into the equation above for the control volume (cv) yields:

dUcv = δQ + dUin + d(pin Vin) − dUout − d(pout Vout) − δWshaft.

The definition of enthalpy, H, permits us to use this thermodynamic potential to account for both internal energy and pV work in fluids for open systems:


dUcv = δQ + dHin − dHout − δWshaft.

If we also allow the system boundary to move (e.g. due to moving pistons), we get a rather general form of the first law for open systems.[13] In terms of time derivatives it reads

dU/dt = Σk Q̇k + Σk Ḣk − Σk pk dVk/dt − P,

with sums over the various places k where heat is supplied, matter flows into the system, and boundaries are moving. The Ḣk terms represent enthalpy flows, which can be written as

Ḣk = hk ṁk = Hm ṅk,

with ṁk the mass flow and ṅk the molar flow at position k, respectively. The term dVk/dt represents the rate of change of the system volume at position k that results in pV power done by the system. The parameter P represents all other forms of power done by the system, such as shaft power, but it can also be, e.g., electric power produced by an electrical power plant.

Note that the previous expression holds true only if the kinetic energy flow rate is conserved between system inlet and outlet. Otherwise, it has to be included in the enthalpy balance. During steady-state operation of a device (see turbine, pump, and engine), the average dU/dt may be set equal to zero. This yields a useful expression for the average power generation for these devices in the absence of chemical reactions:

P = Σk⟨Q̇k⟩ + Σk⟨Ḣk⟩ − Σk⟨pk dVk/dt⟩,

where the angle brackets denote time averages. The technical importance of the enthalpy is directly related to its presence in the first law for open systems, as formulated above.

9.2.7 Diagrams

Nowadays the enthalpy values of important substances can be obtained using commercial software. Practically all relevant material properties can be obtained either in tabular or in graphical form. There are many types of diagrams, such as h–T diagrams, which give the specific enthalpy as a function of temperature for various pressures, and h–p diagrams, which give h as a function of p for various T. One of the most common diagrams is the temperature–specific entropy diagram (T–s diagram). It gives the melting curve and saturated liquid and vapor values together with isobars and isenthalps. These diagrams are powerful tools in the hands of the thermal engineer.

[Figure: T–s diagram of nitrogen.[14] The red curve at the left is the melting curve. The red dome represents the two-phase region, with the low-entropy side the saturated liquid and the high-entropy side the saturated gas. The black curves give the T–s relation along isobars; the pressures are indicated in bar. The blue curves are isenthalps (curves of constant enthalpy), with values indicated in blue in kJ/kg. The specific points a, b, etc., are treated in the main text.]

Some basic applications

The points a through h in the figure play a role in the discussion in this section.

a: T = 300 K, p = 1 bar, s = 6.85 kJ/(kg K), h = 461 kJ/kg;
b: T = 380 K, p = 2 bar, s = 6.85 kJ/(kg K), h = 530 kJ/kg;
c: T = 300 K, p = 200 bar, s = 5.16 kJ/(kg K), h = 430 kJ/kg;
d: T = 270 K, p = 1 bar, s = 6.79 kJ/(kg K), h = 430 kJ/kg;
e: T = 108 K, p = 13 bar, s = 3.55 kJ/(kg K), h = 100 kJ/kg (saturated liquid at 13 bar);
f: T = 77.2 K, p = 1 bar, s = 3.75 kJ/(kg K), h = 100 kJ/kg;
g: T = 77.2 K, p = 1 bar, s = 2.83 kJ/(kg K), h = 28 kJ/kg (saturated liquid at 1 bar);
h: T = 77.2 K, p = 1 bar, s = 5.41 kJ/(kg K), h = 230 kJ/kg (saturated gas at 1 bar).
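The tabulated state points can be transcribed into a short script and used to check the constructions made with the diagram; a minimal sketch (the dictionary below merely transcribes the values listed above, and the last calculation anticipates the throttling discussion that follows):

```python
# State points read off the nitrogen T–s diagram (values as listed above):
# temperature T [K], pressure p [bar], entropy s [kJ/(kg K)], enthalpy h [kJ/kg].
points = {
    "a": dict(T=300.0, p=1.0,   s=6.85, h=461.0),
    "b": dict(T=380.0, p=2.0,   s=6.85, h=530.0),
    "c": dict(T=300.0, p=200.0, s=5.16, h=430.0),
    "d": dict(T=270.0, p=1.0,   s=6.79, h=430.0),
    "e": dict(T=108.0, p=13.0,  s=3.55, h=100.0),
    "f": dict(T=77.2,  p=1.0,   s=3.75, h=100.0),
    "g": dict(T=77.2,  p=1.0,   s=2.83, h=28.0),
    "h": dict(T=77.2,  p=1.0,   s=5.41, h=230.0),
}

# a -> b lies on one isentrope (reversible adiabatic compression),
# while c -> d and e -> f lie on isenthalps (throttling at constant enthalpy).
assert points["a"]["s"] == points["b"]["s"]
assert points["c"]["h"] == points["d"]["h"]
assert points["e"]["h"] == points["f"]["h"]

# Liquid fraction of the two-phase state f, lying between the saturated
# liquid g and the saturated gas h at 1 bar: h_f = x*h_g + (1 - x)*h_h.
x = (points["h"]["h"] - points["f"]["h"]) / (points["h"]["h"] - points["g"]["h"])
print(round(x, 2))  # 0.64
```

The liquid fraction computed in the last step reappears in the throttling example below.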


Throttling

Main article: Joule–Thomson effect

One of the simple applications of the concept of enthalpy is the so-called throttling process, also known as Joule–Thomson expansion. It concerns a steady adiabatic flow of a fluid through a flow resistance (valve, porous plug, or any other type of flow resistance), as shown in the figure. This process is very important, since it is at the heart of domestic refrigerators, where it is responsible for the temperature drop between ambient temperature and the interior of the refrigerator. It is also the final stage in many types of liquefiers.

[Figure: Schematic diagram of a throttling in the steady state. Fluid enters the system (dotted rectangle) at point 1 and leaves it at point 2. The mass flow is ṁ.]

In the first law for open systems (see above) applied to the system, all terms are zero, except the terms for the enthalpy flow. Hence

0 = ṁh1 − ṁh2.

Since the mass flow is constant, the specific enthalpies at the two sides of the flow resistance are the same:

h1 = h2,

that is, the enthalpy per unit mass does not change during the throttling. The consequences of this relation can be demonstrated using the T–s diagram above. Point c is at 200 bar and room temperature (300 K). A Joule–Thomson expansion from 200 bar to 1 bar follows a curve of constant enthalpy of roughly 425 kJ/kg (not shown in the diagram) lying between the 400 and 450 kJ/kg isenthalps and ends in point d, which is at a temperature of about 270 K. Hence the expansion from 200 bar to 1 bar cools nitrogen from 300 K to 270 K. In the valve, there is a lot of friction, and a lot of entropy is produced, but still the final temperature is below the starting value!

Point e is chosen so that it is on the saturated liquid line with h = 100 kJ/kg. It corresponds roughly with p = 13 bar and T = 108 K. Throttling from this point to a pressure of 1 bar ends in the two-phase region (point f). This means that a mixture of gas and liquid leaves the throttling valve. Since the enthalpy is an extensive parameter, the enthalpy in f (hf) is equal to the enthalpy in g (hg) multiplied by the liquid fraction in f (xf) plus the enthalpy in h (hh) multiplied by the gas fraction in f (1 − xf). So

hf = xf hg + (1 − xf) hh.

With numbers: 100 = xf × 28 + (1 − xf) × 230, so xf = 0.64. This means that the mass fraction of the liquid in the liquid–gas mixture that leaves the throttling valve is 64%.

Compressors

Main article: Gas compressor

[Figure: Schematic diagram of a compressor in the steady state. Fluid enters the system (dotted rectangle) at point 1 and leaves it at point 2. The mass flow is ṁ. A power P is applied and a heat flow Q̇ is released to the surroundings at ambient temperature Ta.]

A power P is applied, e.g. as electrical power. If the compression is adiabatic, the gas temperature goes up. In the reversible case it would be at constant entropy, which corresponds with a vertical line in the T–s diagram. For example, compressing nitrogen from 1 bar (point a) to 2 bar (point b) would result in a temperature increase from 300 K to 380 K. In order to let the compressed gas exit at ambient temperature Ta, heat exchange, e.g. by cooling water, is necessary. In the ideal case the compression is isothermal. The average heat flow to the surroundings is Q̇. Since the system is in the steady state, the first law gives

0 = −Q̇ + ṁh1 − ṁh2 + P.

The minimal power needed for the compression is realized if the compression is reversible. In that case the second law of thermodynamics for open systems gives

0 = −Q̇/Ta + ṁs1 − ṁs2.


Eliminating Q̇ gives for the minimal power

Pmin/ṁ = h2 − h1 − Ta (s2 − s1).

For example, compressing 1 kg of nitrogen from 1 bar to 200 bar costs at least (hc − ha) − Ta (sc − sa). With the data obtained with the T–s diagram, we find a value of (430 − 461) − 300 × (5.16 − 6.85) = 476 kJ/kg. The relation for the power can be further simplified by writing it as

Pmin = ṁ ∫₁² (dh − Ta ds).

With dh = T ds + v dp, this results in the final relation

Pmin = ṁ ∫₁² v dp.

9.2.8 See also

• Standard enthalpy change of formation (data table)
• Calorimetry
• Calorimeter
• Departure function
• Hess’s law
• Isenthalpic process
• Stagnation enthalpy
• Thermodynamic databases for pure substances
• Entropy

9.2.9 Notes

[1] The Collected Works of J. Willard Gibbs, Vol. I do not contain reference to the word enthalpy, but rather reference the heat function for constant pressure.

[2] αT = (T/V) (∂(nRT/P)/∂T)P = nRT/(PV) = 1.

9.2.10 References

[1] Zemansky, Mark W. (1968). “Chapter 11”. Heat and Thermodynamics (5th ed.). New York, NY: McGraw-Hill. p. 275.

[2] Van Wylen, G. J.; Sonntag, R. E. (1985). “Section 5.5”. Fundamentals of Classical Thermodynamics (3rd ed.). New York, NY: John Wiley & Sons. ISBN 0-471-82933-1.

[3] "ἐνθάλπω". A Greek–English Lexicon.

[4] Henderson, Douglas; Eyring, Henry; Jost, Wilhelm (1967). Physical Chemistry: An Advanced Treatise. Academic Press. p. 29.

[5] Laidler, Keith (1995). The World of Physical Chemistry. Oxford University Press. p. 110.

[6] Howard, Irmgard (2002). "H Is for Enthalpy, Thanks to Heike Kamerlingh Onnes and Alfred W. Porter”. Journal of Chemical Education (ACS Publications) 79 (6): 697. Bibcode:2002JChEd..79..697H. doi:10.1021/ed079p697.

[7] Guggenheim, E. A. (1959). Thermodynamics. Amsterdam: North-Holland Publishing Company.

[8] Zumdahl, Steven S. (2008). “Thermochemistry”. Chemistry. Cengage Learning. p. 243. ISBN 978-0-547-12532-9.

[9] Moran, M. J.; Shapiro, H. N. (2006). Fundamentals of Engineering Thermodynamics (5th ed.). John Wiley & Sons. p. 511.

[10] Reif, F. (1967). Statistical Physics. London: McGraw-Hill.

[11] Kittel, C.; Kroemer, H. (1980). Thermal Physics. London: Freeman.

[12] Ebbing, Darrel; Gammon, Steven (2010). General Chemistry. Cengage Learning. p. 231. ISBN 978-0-538-49752-7.

[13] Moran, M. J.; Shapiro, H. N. (2006). Fundamentals of Engineering Thermodynamics (5th ed.). John Wiley & Sons. p. 129.

[14] Figure composed with data obtained with RefProp, NIST Standard Reference Database 23.

9.2.11 Bibliography

• Dalton, J. P. (1909). “Researches on the Joule–Kelvin effect, especially at low temperatures. I. Calculations for hydrogen” (PDF). KNAW Proceedings 11: 863–873.
• Haase, R. (1971). Jost, W., ed. Physical Chemistry: An Advanced Treatise. New York, NY: Academic. p. 29.
• Gibbs, J. W. The Collected Works of J. Willard Gibbs, Vol. I (1948 ed.). New Haven, CT: Yale University Press. p. 88.
• Howard, I. K. (2002). "H Is for Enthalpy, Thanks to Heike Kamerlingh Onnes and Alfred W. Porter”. J. Chem. Educ. 79: 697–698. doi:10.1021/ed079p697.
• Laidler, K. (1995). The World of Physical Chemistry. Oxford: Oxford University Press. p. 110.
• Kittel, C.; Kroemer, H. (1980). Thermal Physics. New York, NY: S. R. Furphy & Co. p. 246.
• DeHoff, R. (2006). Thermodynamics in Materials Science (2nd ed.). New York, NY: Taylor and Francis Group.

9.2.12 External links

• Enthalpy – Eric Weisstein’s World of Physics
• Enthalpy – Georgia State University
• Enthalpy example calculations – Texas A&M University Chemistry Department

9.3 Internal energy

In thermodynamics, the internal energy of a system is the energy contained within the system, excluding the kinetic energy of motion of the system as a whole and the potential energy of the system as a whole due to external force fields. It keeps account of the gains and losses of energy of the system that are due to changes in its internal state.[1][2]

The internal energy of a system can be changed by transfers of matter and by work and heat transfer.[3] When matter transfer is prevented by impermeable containing walls, the system is said to be closed. Then the first law of thermodynamics states that the increase in internal energy is equal to the total heat added plus the work done on the system by its surroundings. If the containing walls pass neither matter nor energy, the system is said to be isolated. Then its internal energy cannot change. The first law of thermodynamics may be regarded as establishing the existence of the internal energy.

The internal energy is one of the two cardinal state functions of the state variables of a thermodynamic system.

The SI unit of energy is the joule (J). Sometimes it is convenient to use a corresponding density called specific internal energy, which is internal energy per unit of mass (kilogram) of the system in question. The SI unit of specific internal energy is J/kg. If the specific internal energy is expressed relative to units of amount of substance (mol), then it is referred to as molar internal energy and the unit is J/mol.

9.3.1 Introduction

The internal energy of a given state of a system cannot be directly measured. It is determined through some convenient chain of thermodynamic operations and thermodynamic processes by which the given state can be prepared, starting with a reference state which is customarily assigned a reference value for its internal energy. Such a chain, or path, can be theoretically described by certain extensive state variables of the system, namely, its entropy, S, its volume, V, and its mole numbers, {Nj}. The internal energy, U(S,V,{Nj}), is a function of those. Sometimes, to that list are appended other extensive state variables, for example electric dipole moment. For practical considerations in thermodynamics and engineering it is rarely necessary or convenient to consider all energies belonging to the total intrinsic energy of a system, such as the energy given by the equivalence of mass. Customarily, thermodynamic descriptions include only items relevant to the processes under study. Thermodynamics is chiefly concerned only with changes in the internal energy, not with its absolute value.

The internal energy is a state function of a system, because its value depends only on the current state of the system and not on the path taken or processes undergone to prepare it. It is an extensive quantity. It is the one and only cardinal thermodynamic potential.[4] Through it, by use of Legendre transforms, the other thermodynamic potentials are mathematically constructed. These are functions of variable lists in which some extensive variables are replaced by their conjugate intensive variables. Legendre transformation is necessary because mere substitutive replacement of extensive variables by intensive variables does not lead to thermodynamic potentials. Mere substitution leads to a less informative formula, an equation of state.

Though it is a macroscopic quantity, internal energy can be explained in microscopic terms by two theoretical virtual components. One is the microscopic kinetic energy due to the microscopic motion of the system’s particles (translations, rotations, vibrations). The other is the potential energy associated with the microscopic forces, including the chemical bonds, between the particles; this is for ordinary physics and chemistry. If thermonuclear reactions are specified as a topic of concern, then the static rest mass energy of the constituents of matter is also counted. There is no simple universal relation between these quantities of microscopic energy and the quantities of energy gained or lost by the system in work, heat, or matter transfer.

From the standpoint of statistical mechanics, the internal energy is equal to the ensemble average of the sum of the


microscopic kinetic and potential energies of the system.
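That ensemble-average statement can be illustrated numerically; a minimal sketch with a hypothetical two-level system (the microstate energies and the temperature below are made-up values, in units where the Boltzmann constant is 1):

```python
import math

# Hypothetical two-level system: made-up microstate energies E_i
# (units chosen so that the Boltzmann constant k_B = 1).
energies = [0.0, 1.0]
T = 1.0  # made-up temperature

# Boltzmann probabilities p_i proportional to exp(-E_i / T), normalized.
weights = [math.exp(-E / T) for E in energies]
Z = sum(weights)            # partition function
p = [w / Z for w in weights]

# Internal energy as the ensemble average U = sum_i p_i * E_i.
U = sum(pi * Ei for pi, Ei in zip(p, energies))
print(U)  # a value between min(energies) and max(energies)
```

For this two-level example the average works out to 1/(e + 1), roughly 0.27 in the chosen units.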

energy needed to create the given state of the system from the reference state.

From a non-relativistic microscopic point of view, it may be divided into microscopic potential energy, U ᵢ ᵣₒ ₒ , and The internal energy, U(S,V,{Nj}), expresses the thermody- microscopic kinetic energy, U ᵢ ᵣₒ ᵢ , components: namics of a system in the energy-language, or in the energy representation. Its arguments are exclusively extensive variables of state. Alongside the internal energy, the other U = Umicro pot + Umicro kin cardinal function of state of a thermodynamic system is its entropy, as a function, S(U,V,{Nj}), of the same list of ex- The microscopic kinetic energy of a system arises as the tensive variables of state, except that the entropy, S, is re- sum of the motions of all the system’s particles with replaced in the list by the internal energy, U. It expresses the spect to the center-of-mass frame, whether it be the moentropy representation.[4][5][6] tion of atoms, molecules, atomic nuclei, electrons, or other Each cardinal function is a monotonic function of each particles. The microscopic potential energy algebraic sumof its natural or canonical variables. Each provides its mative components are those of the chemical and nuclear characteristic or fundamental equation, for example U = particle bonds, and the physical force fields within the sysU(S,V,{Nj}), that by itself contains all thermodynamic in- tem, such as due to internal induced electric or magnetic formation about the system. The fundamental equations for dipole moment, as well as the energy of deformation of the two cardinal functions can in principle be interconverted solids (stress-strain). Usually, the split into microscopic kiby solving, for example, U = U(S,V,{Nj}) for S, to get S = netic and potential energies is outside the scope of macroscopic thermodynamics. S(U,V,{Nj}). Cardinal functions

In contrast, Legendre transforms are necessary to derive fundamental equations for other thermodynamic potentials and Massieu functions. The entropy as a function only of extensive state variables is the one and only cardinal function of state for the generation of Massieu functions. It is not itself customarily designated a 'Massieu function', though rationally it might be thought of as such, corresponding to the term 'thermodynamic potential', which includes the internal energy.[5][7][8] For real and practical systems, explicit expressions of the fundamental equations are almost always unavailable, but the functional relations exist in principle. Formal, in principle, manipulations of them are valuable for the understanding of thermodynamics.

Internal energy does not include the energy due to motion or location of a system as a whole. That is to say, it excludes any kinetic or potential energy the body may have because of its motion or location in external gravitational, electrostatic, or electromagnetic fields. It does, however, include the contribution of such a field to the energy due to the coupling of the internal degrees of freedom of the object with the field. In such a case, the field is included in the thermodynamic description of the object in the form of an additional external parameter.

For practical considerations in thermodynamics or engineering, it is rarely necessary, convenient, nor even possible, to consider all energies belonging to the total intrinsic energy of a sample system, such as the energy given by the equivalence of mass. Typically, descriptions only include components relevant to the system under study. Indeed, in most systems under consideration, especially through ther9.3.2 Description and definition modynamics, it is impossible to calculate the total internal The internal energy U of a given state of the system is de- energy.[9] Therefore, a convenient null reference point may termined relative to that of a standard state of the system, be chosen for the internal energy. by adding up the macroscopic transfers of energy that ac- The internal energy is an extensive property: it depends on company a change of state from the reference state to the the size of the system, or on the amount of substance it congiven state: tains.

∆U =



Ei

i

where ΔU denotes the difference between the internal energy of the given state and that of the reference state, and the Ei are the various energies transferred to the system in the steps from the reference state to the given state. It is the

At any temperature greater than absolute zero, microscopic potential energy and kinetic energy are constantly converted into one another, but the sum remains constant in an isolated system (cf. table). In the classical picture of thermodynamics, kinetic energy vanishes at zero temperature and the internal energy is purely potential energy. However, quantum mechanics has demonstrated that even at zero temperature particles maintain a residual energy of motion, the zero

9.3. INTERNAL ENERGY

221

point energy. A system at absolute zero is merely in its quantum-mechanical ground state, the lowest energy state available. At absolute zero a system of given composition has attained its minimum attainable entropy.

A second mechanism of change of internal energy of a closed system is the doing of work on the system, either in mechanical form by changing pressure or volume, or by other perturbations, such as directing an electric current The microscopic kinetic energy portion of the internal en- through the system. ergy gives rise to the temperature of the system. Statistical If the system is not closed, the third mechanism that can inmechanics relates the pseudo-random kinetic energy of in- crease the internal energy is transfer of matter into the sysdividual particles to the mean kinetic energy of the entire tem. This increase, ΔU ₐ ₑᵣ cannot be split into heat and ensemble of particles comprising a system. Furthermore, it work components. If the system is so set up physically that relates the mean microscopic kinetic energy to the macro- heat and work can be done on it by pathways separate from scopically observed empirical property that is expressed as and independent of matter transfer, then the transfers of entemperature of the system. This energy is often referred to ergy add to change the internal energy: as the thermal energy of a system,[10] relating this energy, like the temperature, to the human experience of hot and cold. ∆U = Q + Wpressure−volume + Wisochoric + ∆Umatter Statistical mechanics considers any system to be statistically (separate pathway for matter transfer from heat and work transfer pathway distributed across an ensemble of N microstates. Each microstate has an energy Eᵢ and is associated with a probabil- If a system undergoes certain phase transformations while ity pᵢ. The internal energy is the mean value of the system’s being heated, such as melting and vaporization, it may total energy, i.e., the sum of all microstate energies, each be observed that the temperature of the system does not change until the entire sample has completed the transforweighted by their probability of occurrence: mation. 
The energy introduced into the system while the temperature did not change is called a latent energy, or latent N ∑ heat, in contrast to sensible heat, which is associated with U= p i Ei . temperature change. i=1

This is the statistical expression of the first law of thermodynamics. 9.3.3 Internal energy changes Thermodynamics is chiefly concerned only with the changes, ΔU, in internal energy.

Internal energy of the ideal gas

Thermodynamics often uses the concept of the ideal gas for teaching purposes, and as an approximation for working systems. The ideal gas is a gas of particles considered as point objects that interact only by elastic collisions and fill a volume such that their free mean path between collisions is much larger than their diameter. Such systems are approximated by the monatomic gases, helium and the other noble gases. Here the kinetic energy consists only of the translational energy of the individual atoms. Monatomic particles do not rotate or vibrate, and are not electronically excited to higher energies except at very high temperatures.

For a closed system, with matter transfer excluded, the changes in internal energy are due to heat transfer Q and due to work. The latter can be split into two kinds: pressure–volume work W_pressure−volume, and frictional and other kinds, such as electrical polarization, which do not alter the volume of the system and are called isochoric, W_isochoric. Accordingly, the internal energy change ΔU for a process may be written[3]

ΔU = Q + W_pressure−volume + W_isochoric   (closed system, no transfer of matter).[note 1]

When a closed system receives energy as heat, this energy increases the internal energy. It is distributed between microscopic kinetic and microscopic potential energies. In general, thermodynamics does not trace this distribution. In an ideal gas all of the extra energy results in a temperature increase, as it is stored solely as microscopic kinetic energy; such heating is said to be sensible.

Therefore, internal energy changes in an ideal gas may be described solely by changes in its kinetic energy. Kinetic energy is simply the internal energy of the perfect gas and depends entirely on its pressure, volume and thermodynamic temperature.

The internal energy of an ideal gas is proportional to its amount (number of moles) N and to its temperature T:

U = cNT,

where c is the heat capacity (at constant volume) of the gas. The internal energy may be written as a function of the three extensive properties S, V, N (entropy, volume, amount of substance) in the following way:[11][12]

U(S, V, N) = const · e^(S/(cN)) · V^(−R/c) · N^((R+c)/c),

where const is an arbitrary positive constant and where R is the universal gas constant. It is easily seen that U is a linearly homogeneous function of the three variables (that is, it is extensive in these variables), and that it is weakly convex. Knowing temperature and pressure to be the derivatives T = ∂U/∂S, p = −∂U/∂V, the ideal gas law pV = RNT immediately follows.

9.3.4 Internal energy of a closed thermodynamic system

The above summation of all components of change in internal energy assumes that a positive energy denotes heat added to the system or work done on the system, while a negative energy denotes work of the system on the environment.

Typically this relationship is expressed in infinitesimal terms using the differentials of each term. Only the internal energy is an exact differential. For a system undergoing only thermodynamic processes, i.e. a closed system that can exchange only heat and work, the change in the internal energy is

dU = δQ + δW,

which constitutes the first law of thermodynamics.[note 1] It may be expressed in terms of other thermodynamic parameters. Each term is composed of an intensive variable (a generalized force) and its conjugate infinitesimal extensive variable (a generalized displacement).

For example, for a non-viscous fluid, the mechanical work done on the system may be related to the pressure p and volume V. The pressure is the intensive generalized force, while the volume is the extensive generalized displacement:

δW = −p dV.

This defines the direction of work, W, to be energy flow from the working system to the surroundings, indicated by a negative term.[note 1] Taking the direction of heat transfer Q to be into the working fluid and assuming a reversible process, the heat is

δQ = T dS,

where T is temperature and S is entropy, and the change in internal energy becomes

dU = T dS − p dV.

Changes due to temperature and volume

The expression relating changes in internal energy to changes in temperature and volume is

dU = C_V dT + [T(∂p/∂T)_V − p] dV.   (1)

This is useful if the equation of state is known. In the case of an ideal gas, we can derive that dU = C_V dT, i.e. the internal energy of an ideal gas can be written as a function that depends only on the temperature.

Proof of pressure independence for an ideal gas

The expression relating changes in internal energy to changes in temperature and volume is

dU = C_V dT + [T(∂p/∂T)_V − p] dV.

The equation of state is the ideal gas law

pV = nRT.

Solve for pressure:

p = nRT/V.

Substitute into the internal energy expression:

dU = C_V dT + [T(∂p/∂T)_V − nRT/V] dV.

Take the derivative of pressure with respect to temperature:

(∂p/∂T)_V = nR/V.

Replace:

dU = C_V dT + [nRT/V − nRT/V] dV.


And simplify:

dU = C_V dT.

Derivation of dU in terms of dT and dV

To express dU in terms of dT and dV, the term

dS = (∂S/∂T)_V dT + (∂S/∂V)_T dV

is substituted in the fundamental thermodynamic relation

dU = T dS − p dV.

This gives:

dU = T(∂S/∂T)_V dT + [T(∂S/∂V)_T − p] dV.

The term T(∂S/∂T)_V is the heat capacity at constant volume, C_V.

The partial derivative of S with respect to V can be evaluated if the equation of state is known. From the fundamental thermodynamic relation, it follows that the differential of the Helmholtz free energy A is given by:

dA = −S dT − p dV.

The symmetry of second derivatives of A with respect to T and V yields the Maxwell relation:

(∂S/∂V)_T = (∂p/∂T)_V.

This gives the expression above.

Changes due to temperature and pressure

When dealing with fluids or solids, an expression in terms of the temperature and pressure is usually more useful:

dU = (C_p − αpV) dT + (β_T p − αT)V dp,

where it is assumed that the heat capacity at constant pressure is related to the heat capacity at constant volume according to:

C_p = C_V + VT α²/β_T.

Derivation of dU in terms of dT and dP

The partial derivative of the pressure with respect to temperature at constant volume can be expressed in terms of the coefficient of thermal expansion

α ≡ (1/V)(∂V/∂T)_p

and the isothermal compressibility

β_T ≡ −(1/V)(∂V/∂p)_T

by writing:

dV = (∂V/∂p)_T dp + (∂V/∂T)_p dT = V(α dT − β_T dp)   (2)

and equating dV to zero and solving for the ratio dp/dT. This gives:

(∂p/∂T)_V = −(∂V/∂T)_p / (∂V/∂p)_T = α/β_T   (3)

Substituting (2) and (3) in (1) gives the above expression.

Changes due to volume at constant temperature

The internal pressure is defined as a partial derivative of the internal energy with respect to the volume at constant temperature:

π_T = (∂U/∂V)_T.

9.3.5 Internal energy of multi-component systems

In addition to including the entropy S and volume V terms in the internal energy, a system is often described also in terms of the number of particles or chemical species it contains:

U = U(S, V, N_1, ..., N_n),

where the N_j are the molar amounts of constituents of type j in the system. The internal energy is an extensive function of the extensive variables S, V, and the amounts N_j, so the internal energy may be written as a linearly homogeneous function of first degree:

U(αS, αV, αN_1, αN_2, ...) = αU(S, V, N_1, N_2, ...),

where α is a factor describing the growth of the system. The differential internal energy may be written as

dU = (∂U/∂S) dS + (∂U/∂V) dV + Σ_i (∂U/∂N_i) dN_i = T dS − p dV + Σ_i μ_i dN_i,

which shows (or defines) temperature T to be the partial derivative of U with respect to entropy S, and pressure p to be the negative of the similar derivative with respect to volume V:

T = ∂U/∂S,

p = −∂U/∂V,

and where the coefficients μ_i are the chemical potentials for the components of type i in the system. The chemical potentials are defined as the partial derivatives of the energy with respect to the variations in composition:

μ_i = (∂U/∂N_i)_{S,V,N_{j≠i}}.

As conjugate variables to the composition {N_j}, the chemical potentials are intensive properties, intrinsically characteristic of the qualitative nature of the system, and not proportional to its extent. Because of the extensive nature of U and its independent variables, using Euler's homogeneous function theorem, the differential dU may be integrated and yields an expression for the internal energy:

U = TS − pV + Σ_i μ_i N_i.

The sum over the composition of the system is the Gibbs free energy:

G = Σ_i μ_i N_i,

that arises from changing the composition of the system at constant temperature and pressure. For a single component system, the chemical potential equals the Gibbs energy per amount of substance, i.e. particles or moles according to the original definition of the unit for {N_j}.

9.3.6 Internal energy in an elastic medium

For an elastic medium the mechanical energy term of the internal energy must be replaced by the more general expression involving the stress σ_ij and strain ε_ij. The infinitesimal statement is:

dU = T dS + V σ_ij dε_ij,

where Einstein notation has been used for the tensors, in which there is a summation over all repeated indices in the product term. The Euler theorem yields for the internal energy:[13]

U = TS + (1/2) V σ_ij ε_ij.

For a linearly elastic material, the stress is related to the strain by:

σ_ij = C_ijkl ε_kl,

where the C_ijkl are the components of the 4th-rank elastic constant tensor of the medium.
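Returning to the ideal gas: the expression dU = C_V dT + [T(∂p/∂T)_V − p] dV derived earlier implies that the internal pressure π_T vanishes for an ideal gas, so dU = C_V dT. A quick finite-difference sketch of that result (the values of n, V and T are illustrative):

```python
# Finite-difference check of the earlier result that the internal pressure
# pi_T = T*(dp/dT)_V - p vanishes for an ideal gas, so that dU = C_V dT.
R = 8.314    # J/(mol K), universal gas constant
n = 1.0      # mol
V = 0.024    # m^3, held constant

def p(T):
    """Ideal gas pressure at fixed n and V."""
    return n * R * T / V

T = 300.0
dT = 1e-3
dp_dT = (p(T + dT) - p(T - dT)) / (2 * dT)  # central difference for (dp/dT)_V
pi_T = T * dp_dT - p(T)
assert abs(pi_T) < 1e-3   # zero up to floating-point round-off
print("internal pressure pi_T:", pi_T)
```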

9.3.7 History

James Joule studied the relationship between heat, work, and temperature. He observed that if he did mechanical work on a fluid, such as water, by agitating the fluid, its temperature increased. He proposed that the mechanical work he was doing on the system was converted to thermal energy. Specifically, he found that 4185.5 joules of energy were needed to raise the temperature of a kilogram of water by one degree Celsius.


9.3.8 Notes

[1] In this article we choose the sign convention of the mechanical work as typically defined in chemistry, which is different from the convention used in physics. In chemistry, work performed by the system against the environment, e.g., a system expansion, is negative, while in physics this is taken to be positive.
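The two sign conventions contrasted in this note can be made concrete with a small sketch; the numbers are illustrative:

```python
# The sign conventions contrasted in the note above. In the chemistry
# convention used in this article, work done ON the system is positive:
# Delta U = Q + W. In the physics convention, work done BY the system is
# positive: Delta U = Q - W. Both describe the same physics.
def delta_U_chemistry(Q, W_on_system):
    return Q + W_on_system

def delta_U_physics(Q, W_by_system):
    return Q - W_by_system

# A gas absorbs 500 J of heat and expands, doing 200 J of work on the
# surroundings. Both conventions give the same change in internal energy:
assert delta_U_chemistry(500.0, -200.0) == delta_U_physics(500.0, 200.0)
print(delta_U_chemistry(500.0, -200.0))  # 300.0
```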

9.3.9 See also

• Calorimetry
• Enthalpy
• Exergy
• Thermodynamic equations
• Thermodynamic potentials

9.3.10 References

[1] Crawford, F. H. (1963), pp. 106–107.
[2] Haase, R. (1971), pp. 24–28.
[3] Born, M. (1949), Appendix 8, pp. 146–149.
[4] Tschoegl, N.W. (2000), p. 17.
[5] Callen, H.B. (1960/1985), Chapter 5.
[6] Münster, A. (1970), p. 6.
[7] Münster, A. (1970), Chapter 3.
[8] Bailyn, M. (1994), pp. 206–209.
[9] Klotz, I.; Rosenberg, R. (2008). Chemical Thermodynamics – Basic Concepts and Methods, 7th ed., Wiley, p. 39.
[10] Thermal energy – Hyperphysics.
[11] van Gool, W.; Bruggink, J.J.C. (Eds) (1985). Energy and time in the economic and physical sciences. North-Holland. pp. 41–56. ISBN 0444877487.
[12] Grubbström, Robert W. (2007). "An Attempt to Introduce Dynamics Into Generalised Exergy Considerations". Applied Energy 84: 701–718. doi:10.1016/j.apenergy.2007.01.003.
[13] Landau & Lifshitz 1986.

Bibliography of cited references

• Adkins, C.J. (1968/1975). Equilibrium Thermodynamics, second edition, McGraw-Hill, London, ISBN 0-07-084057-1.
• Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.
• Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
• Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, ISBN 0-471-86256-8.
• Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
• Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
• Münster, A. (1970). Classical Thermodynamics, translated by E.S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6.
• Tschoegl, N.W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.

9.3.11 Bibliography

• Alberty, R. A. (2001). "Use of Legendre transforms in chemical thermodynamics" (PDF). Pure Appl. Chem. 73 (8): 1349–1380. doi:10.1351/pac200173081349.
• Lewis, Gilbert Newton; Randall, Merle; revised by Pitzer, Kenneth S. & Brewer, Leo (1961). Thermodynamics (2nd ed.). New York, NY, USA: McGraw-Hill Book Co. ISBN 0-07-113809-9.
• Landau, L. D.; Lifshitz, E. M. (1986). Theory of Elasticity (Course of Theoretical Physics Volume 7). (Translated from Russian by J.B. Sykes and W.H. Reid) (Third ed.). Boston, MA: Butterworth Heinemann. ISBN 0-7506-2633-X.

Chapter 10. Equations

10.1 Ideal gas law

The ideal gas law is the equation of state of a hypothetical ideal gas. It is a good approximation to the behavior of many gases under many conditions, although it has several limitations. It was first stated by Émile Clapeyron in 1834 as a combination of Boyle's law, Charles' law and Avogadro's law.[1] The ideal gas law is often written as:

P V = nRT

where:

• P is the pressure of the gas
• V is the volume of the gas
• n is the amount of substance of gas (in moles)
• R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant
• T is the temperature of the gas

It can also be derived microscopically from kinetic theory, as was achieved (apparently independently) by August Krönig in 1856[2] and Rudolf Clausius in 1857.[3]

Isotherms of an ideal gas. The curved lines represent the relationship between pressure (on the vertical, y-axis) and volume (on the horizontal, x-axis) for an ideal gas at different temperatures: lines which are farther away from the origin (that is, lines that are nearer to the top right-hand corner of the diagram) represent higher temperatures.

10.1.1 Equation

The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: in the SI system of units, kelvin.[4]

Common form

The most frequently introduced form is

P V = nRT

where:

• P is the pressure of the gas
• V is the volume of the gas
• n is the amount of substance of gas (also known as number of moles)


• R is the ideal, or universal, gas constant, equal to the product of the Boltzmann constant and the Avogadro constant
• T is the temperature of the gas

In SI units, P is measured in pascals, V is measured in cubic metres, n is measured in moles, and T in kelvins (the Kelvin scale is a shifted Celsius scale, where 0 K = −273.15 °C, the lowest possible temperature). R has the value 8.314 J·K⁻¹·mol⁻¹ or 0.08206 L·atm·mol⁻¹·K⁻¹ (≈2 cal·K⁻¹·mol⁻¹) if using pressure in standard atmospheres (atm) instead of pascals, and volume in litres instead of cubic metres.

Molar form

How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount (n) (in moles) is equal to the mass (m) (in grams) divided by the molar mass (M) (in grams per mole):

n = m/M

By replacing n with m/M, and subsequently introducing density ρ = m/V, we get:

P V = (m/M) R T

P = ρ (R/M) T

Defining the specific gas constant R_specific as the ratio R/M,

P = ρ R_specific T

This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume v, the reciprocal of density, as

P v = R_specific T

It is common, especially in engineering applications, to represent the specific gas constant by the symbol R. In such cases, the universal gas constant is usually given a different symbol such as R̄ to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to.[5]

Statistical mechanics

In statistical mechanics the following molecular equation is derived from first principles:

P V = N k_B T

where P is the absolute pressure of the gas measured in pascals; N is the number of molecules in the given volume V. The number density is given by the ratio N/V; k_B is the Boltzmann constant relating temperature and energy; and T is the absolute temperature in kelvins.

The number density contrasts to the other formulation, which uses n, the number of moles, and V, the volume. This relation implies that R = N_A k_B, where N_A is Avogadro's constant, and the consistency of this result with experiment is a good check on the principles of statistical mechanics.

From this we can notice that for an average particle mass of μ times the atomic mass constant m_u (i.e., the mass is μ u),

N = m/(μ m_u)

and since ρ = m/V, we find that the ideal gas law can be rewritten as:

P = (1/V)(m/(μ m_u)) k T = (k/(μ m_u)) ρ T.

In SI units, P is measured in pascals; V in cubic metres; μ is a dimensionless number; and T in kelvins. k has the value 1.38·10⁻²³ J·K⁻¹ in SI units.
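The specific-gas-constant form P = ρ R_specific T can be sketched for dry air; the molar mass of air used below is an assumed standard value, not taken from this text:

```python
# Ideal gas law in the specific-gas-constant form P = rho * R_specific * T,
# applied to dry air. M_air (~0.029 kg/mol) is an assumed standard value.
R_universal = 8.314      # J/(mol K), universal gas constant
M_air = 0.02897          # kg/mol, approximate molar mass of dry air
R_specific = R_universal / M_air   # ~287 J/(kg K)

P = 101325.0   # Pa, one standard atmosphere
T = 288.15     # K, i.e. 15 degrees Celsius
rho = P / (R_specific * T)   # solve P = rho * R_specific * T for density
print(round(rho, 3))  # ~1.225 kg/m^3, the standard sea-level air density
```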

10.1.2 Applications to thermodynamic processes

The table below essentially simplifies the ideal gas equation for particular processes, thus making this equation easier to solve using numerical methods.

A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties (P, V, T, or S) is constant throughout the process.

For a given thermodynamic process, in order to specify the extent of a particular process, one of the property ratios (which are listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from


the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation). In the final three columns, the properties (P, V, or T) at state 2 can be calculated from the properties at state 1 using the equations listed.

^a. In an isentropic process, system entropy (S) is constant. Under these conditions, P₁V₁^γ = P₂V₂^γ, where γ is defined as the heat capacity ratio, which is constant for a calorifically perfect gas. The value used for γ is typically 1.4 for diatomic gases like nitrogen (N₂) and oxygen (O₂) (and air, which is 99% diatomic). Also γ is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines γ varies between 1.35 and 1.15, depending on constituent gases and temperature.
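The isentropic relation in note a can be sketched numerically; the initial pressure and volumes below are illustrative values, not taken from the table:

```python
# Isentropic (constant-entropy) process from note a: P1*V1**gamma = P2*V2**gamma.
# Halving the volume of a diatomic gas (gamma = 1.4); numbers are illustrative.
gamma = 1.4              # heat capacity ratio for diatomic gases such as N2
P1, V1 = 100000.0, 1.0   # initial state: Pa, m^3
V2 = 0.5                 # final volume: half of V1

P2 = P1 * (V1 / V2) ** gamma   # solve P1*V1^gamma = P2*V2^gamma for P2
print(round(P2))               # pressure rises by a factor of 2**1.4, about 2.64
```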

10.1.3 Theoretical

Kinetic theory

Main article: Kinetic theory of gases

The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container, in which both linear momentum and kinetic energy are conserved.

Deviations from ideal behavior of real gases

The equation of state given here applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed equations of state, such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces. A residual property is defined as the difference between a real gas property and an ideal gas property, both considered at the same pressure, temperature, and composition.

10.1.4 Derivations

Empirical

The ideal gas law can be derived from combining two empirical gas laws: the combined gas law and Avogadro's law. The combined gas law states that

PV/T = C

where C is a constant which is directly proportional to the amount of gas, n (Avogadro's law). The proportionality factor is the universal gas constant, R, i.e. C = nR.

Hence the ideal gas law

P V = nRT

Statistical mechanics

Main article: Statistical mechanics

Let q = (q_x, q_y, q_z) and p = (p_x, p_y, p_z) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then the time-averaged potential energy of the particle is:

⟨q · F⟩ = ⟨q_x dp_x/dt⟩ + ⟨q_y dp_y/dt⟩ + ⟨q_z dp_z/dt⟩
        = −⟨q_x ∂H/∂q_x⟩ − ⟨q_y ∂H/∂q_y⟩ − ⟨q_z ∂H/∂q_z⟩ = −3k_B T,

where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of N particles yields

3N k_B T = −⟨Σ_{k=1}^{N} q_k · F_k⟩.

By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure P of the gas. Hence

−⟨Σ_{k=1}^{N} q_k · F_k⟩ = P ∮_surface q · dS,


where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is

∇ · q = ∂q_x/∂q_x + ∂q_y/∂q_y + ∂q_z/∂q_z = 3,

the divergence theorem implies that

P ∮_surface q · dS = P ∫_volume (∇ · q) dV = 3PV,

where dV is an infinitesimal volume within the container and V is the total volume of the container.

Putting these equalities together yields

3N k_B T = −⟨Σ_{k=1}^{N} q_k · F_k⟩ = 3PV,

which immediately implies the ideal gas law for N particles:

P V = N k_B T = nRT,

where n = N/N_A is the number of moles of gas and R = N_A k_B is the gas constant.
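The closing identity R = N_A k_B can be checked directly, confirming that the molecular form PV = Nk_BT and the molar form PV = nRT agree; the constants below are the standard SI-defined values:

```python
# Consistency check of the two forms derived above: PV = N*k_B*T and
# PV = n*R*T agree because R = N_A * k_B.
k_B = 1.380649e-23   # J/K, Boltzmann constant (exact SI value)
N_A = 6.02214076e23  # 1/mol, Avogadro constant (exact SI value)
R = N_A * k_B        # universal gas constant, ~8.314 J/(mol K)

n = 1.0              # mol
N = n * N_A          # number of molecules
T = 273.15           # K
V = 0.0224           # m^3

P_molecular = N * k_B * T / V
P_molar = n * R * T / V
print(round(R, 3))   # 8.314
assert abs(P_molecular - P_molar) < 1e-6  # identical up to round-off
```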

10.1.5 See also

• Van der Waals equation
• Boltzmann constant
• Configuration integral
• Dynamic pressure
• Internal energy

10.1.6 References

[1] Clapeyron, E. (1834). "Mémoire sur la puissance motrice de la chaleur". Journal de l'École Polytechnique (in French) XIV: 153–90. Facsimile at the Bibliothèque nationale de France (pp. 153–90).
[2] Krönig, A. (1856). "Grundzüge einer Theorie der Gase". Annalen der Physik und Chemie (in German) 99 (10): 315–22. Bibcode:1856AnP...175..315K. doi:10.1002/andp.18561751008. Facsimile at the Bibliothèque nationale de France (pp. 315–22).
[3] Clausius, R. (1857). "Ueber die Art der Bewegung, welche wir Wärme nennen". Annalen der Physik und Chemie (in German) 176 (3): 353–79. Bibcode:1857AnP...176..353C. doi:10.1002/andp.18571760302. Facsimile at the Bibliothèque nationale de France (pp. 353–79).
[4] "Equation of State".
[5] Moran and Shapiro, Fundamentals of Engineering Thermodynamics, Wiley, 4th Ed, 2000.

10.1.7 Further reading

• Davis and Masten, Principles of Environmental Engineering and Science, McGraw-Hill Companies, Inc., New York (2002) ISBN 0-07-235053-9
• Website giving credit to Benoît Paul Émile Clapeyron, (1799–1864) in 1834

10.1.8 External links

• Configuration integral (statistical mechanics), where an alternative statistical mechanics derivation of the ideal-gas law, using the relationship between the Helmholtz free energy and the partition function, but without using the equipartition theorem, is provided. Vu-Quoc, L., Configuration integral (statistical mechanics), 2008. This wiki site is down; see this article in the web archive on 2012 April 28.
• Online Ideal Gas law Calculator

Chapter 11. Fundamentals

11.1 Fundamental thermodynamic relation

In thermodynamics, the fundamental thermodynamic relation is generally expressed as an infinitesimal change in internal energy in terms of infinitesimal changes in entropy and volume for a closed system in thermal equilibrium, in the following way:

dU = T dS − P dV

Here, U is internal energy, T is absolute temperature, S is entropy, P is pressure, and V is volume. This relation applies to a reversible change, or to a change in a closed system of uniform temperature and pressure at constant composition.[1]

This is only one expression of the fundamental thermodynamic relation. It may be expressed in other ways, using different variables (e.g. using thermodynamic potentials). For example, the fundamental relation may be expressed in terms of the enthalpy as

dH = T dS + V dP

in terms of the Helmholtz free energy (F) as

dF = −S dT − P dV

and in terms of the Gibbs free energy (G) as

dG = −S dT + V dP

11.1.1 Derivation from the first and second laws of thermodynamics

The first law of thermodynamics states that:

dU = δQ − δW

where δQ and δW are infinitesimal amounts of heat supplied to the system by its surroundings and work done by the system on its surroundings, respectively. According to the second law of thermodynamics we have for a reversible process:

dS = δQ/T

Hence:

δQ = T dS

By substituting this into the first law, we have:

dU = T dS − δW

Letting δW be reversible pressure–volume work done by the system on its surroundings,

δW = P dV

we have:

dU = T dS − P dV

This equation has been derived in the case of reversible changes. However, since U, S, and V are thermodynamic state functions, the above relation holds also for non-reversible changes in a system of uniform pressure and temperature at constant composition.[1]
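The Helmholtz form dF = −S dT − P dV implies (∂F/∂V)_T = −P. A finite-difference sketch for an ideal gas, assuming the standard ideal-gas volume dependence F(V) = −nRT ln(V/V₀) at fixed T (the omitted temperature-dependent constant does not affect the derivative):

```python
# Finite-difference check that -(dF/dV)_T reproduces P = nRT/V for an ideal
# gas, consistent with dF = -S dT - P dV. The form of F used here is the
# standard ideal-gas result, assumed for this sketch.
import math

R, n, T = 8.314, 1.0, 300.0   # SI units; illustrative values
V0 = 1.0                      # m^3, arbitrary reference volume

def F(V):
    """Volume-dependent part of the Helmholtz free energy at fixed T."""
    return -n * R * T * math.log(V / V0)

V = 0.05
dV = 1e-8
P_from_F = -(F(V + dV) - F(V - dV)) / (2 * dV)  # central difference
P_ideal = n * R * T / V
assert abs(P_from_F - P_ideal) / P_ideal < 1e-6
print(round(P_ideal, 1))
```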


If the composition, i.e. the amounts n_i of the chemical components, in a system of uniform temperature and pressure can also change, e.g. due to a chemical reaction, the fundamental thermodynamic relation generalizes to:

dU = T dS − P dV + Σ_i μ_i dn_i

The μ_j are the chemical potentials corresponding to particles of type j. The last term must be zero for a reversible process.

If the system has more external parameters than just the volume that can change, the fundamental thermodynamic relation generalizes to:

dU = T dS − Σ_j X_j dx_j + Σ_i μ_i dn_i

Here the X_j are the generalized forces corresponding to the external parameters x_j.

11.1.2 Derivation from statistical mechanical principles

The above derivation uses the first and second laws of thermodynamics. The first law of thermodynamics is essentially a definition of heat, i.e. heat is the change in the internal energy of a system that is not caused by a change of the external parameters of the system.

However, the second law of thermodynamics is not a defining relation for the entropy. The fundamental definition of the entropy of an isolated system containing an amount of energy E is:

S = k log[Ω(E)]

where Ω(E) is the number of quantum states in a small interval between E and E + δE. Here δE is a macroscopically small energy interval that is kept fixed. Strictly speaking this means that the entropy depends on the choice of δE. However, in the thermodynamic limit (i.e. in the limit of infinitely large system size), the specific entropy (entropy per unit volume or per unit mass) does not depend on δE. The entropy is thus a measure of the uncertainty about exactly which quantum state the system is in, given that we know its energy to be in some interval of size δE.

Deriving the fundamental thermodynamic relation from first principles thus amounts to proving that the above definition of entropy implies that for reversible processes we have:

dS = δQ/T

The fundamental assumption of statistical mechanics is that all the Ω(E) states are equally likely. This allows us to extract all the thermodynamical quantities of interest. The temperature is defined as:

1/(kT) ≡ β ≡ d log[Ω(E)]/dE

This definition can be derived from the microcanonical ensemble, which is a system of a constant number of particles, a constant volume and that does not exchange energy with its environment. Suppose that the system has some external parameter, x, that can be changed. In general, the energy eigenstates of the system will depend on x. According to the adiabatic theorem of quantum mechanics, in the limit of an infinitely slow change of the system's Hamiltonian, the system will stay in the same energy eigenstate and thus change its energy according to the change in energy of the energy eigenstate it is in.

The generalized force, X, corresponding to the external parameter x is defined such that X dx is the work performed by the system if x is increased by an amount dx. E.g., if x is the volume, then X is the pressure. The generalized force for a system known to be in energy eigenstate E_r is given by:

X = −dE_r/dx

Since the system can be in any energy eigenstate within an interval of δE, we define the generalized force for the system as the expectation value of the above expression:

X = −⟨dE_r/dx⟩

To evaluate the average, we partition the Ω(E) energy eigenstates by counting how many of them have a value for dE_r/dx within a range between Y and Y + δY. Calling this number Ω_Y(E), we have:

Ω(E) = Σ_Y Ω_Y(E)

The average defining the generalized force can now be written:

X = −(1/Ω(E)) Σ_Y Y Ω_Y(E)

We can relate this to the derivative of the entropy with respect to x at constant energy E as follows. Suppose we change x to x + dx. Then Ω(E) will change because the energy eigenstates depend on x, causing energy eigenstates to move into or out of the range between E and E + δE. Let's focus again on the energy eigenstates for which dE_r/dx lies within the range between Y and Y + δY. Since these energy eigenstates increase in energy by Y dx, all such energy eigenstates that are in the interval ranging from E − Y dx to E move from below E to above E. There are

N_Y(E) = (Ω_Y(E)/δE) Y dx

such energy eigenstates. If Y dx ≤ δE, all these energy eigenstates will move into the range between E and E + δE and contribute to an increase in Ω. The number of energy eigenstates that move from below E + δE to above E + δE is, of course, given by N_Y(E + δE). The difference

N_Y(E) − N_Y(E + δE)

is thus the net contribution to the increase in Ω. Note that if Y dx is larger than δE there will be energy eigenstates that move from below E to above E + δE. They are counted in both N_Y(E) and N_Y(E + δE), therefore the above expression is also valid in that case. Expressing the above expression as a derivative with respect to E and summing over Y yields the expression:

(∂Ω/∂x)_E = −Σ_Y Y (∂Ω_Y/∂E)_x = (∂(ΩX)/∂E)_x

The logarithmic derivative of Ω with respect to x is thus given by:

(∂ log(Ω)/∂x)_E = βX + (∂X/∂E)_x

The first term is intensive, i.e. it does not scale with system size. In contrast, the last term scales as the inverse system size and thus vanishes in the thermodynamic limit. We have thus found that:

(∂S/∂x)_E = X/T

Combining this with

(∂S/∂E)_x = 1/T

gives:

dS = (∂S/∂E)_x dE + (∂S/∂x)_E dx = dE/T + (X/T) dx

which we can write as:

dE = T dS − X dx

11.1.3 References

[1] Schmidt-Rohr, K. (2014). "Expansion Work without the External Pressure, and Thermodynamics in Terms of Quasistatic Irreversible Processes". J. Chem. Educ. 91: 402–409. http://dx.doi.org/10.1021/ed3008704

11.1.4 External links

• The Fundamental Thermodynamic Relation
• Thermodynamics and Heat Transfer

11.2 Heat engine

See also: Thermodynamic cycle

In thermodynamics, a heat engine is a system that converts heat or thermal energy (and chemical energy) to mechanical energy, which can then be used to do mechanical work.[1][2] It does this by bringing a working substance from a higher temperature state to a lower temperature state. A heat "source" generates thermal energy that brings the working substance to the high temperature state. The working substance generates work in the "working body" of the engine while transferring heat to the colder "sink" until it reaches a low temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid. During this process, a lot of heat is lost to the surroundings, i.e. it cannot be used.

In general an engine converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem.[3] Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), absorption of light or energetic particles, friction,

11.2. HEAT ENGINE dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines are very versatile and have a wide range of applicability.

Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the model.

11.2.1 Overview

Figure 1: Heat engine diagram

In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealized) theoretical model, the practical nuances of an actual mechanical engine, and the discrepancies between the two.

In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, all expressed in absolute temperature or kelvins.

The efficiency of various heat engines proposed or used today has a large range:

• 3 percent[4] (97 percent waste heat using low quality heat) for the OTEC ocean power proposal
• 25 percent for most automotive gasoline engines[5]
• 49 percent for a supercritical coal-fired power station such as the Avedøre Power Station, and many others
• 60 percent for a steam-cooled combined cycle gas turbine[6]

All these processes gain their efficiency (or lack thereof) from the temperature drop across them. Significant energy may be used for auxiliary equipment, such as pumps, which effectively reduces efficiency.

Power

Heat engines can be characterized by their specific power, which is typically given in kilowatts per litre of engine displacement (in the U.S. also horsepower per cubic inch). The result offers an approximation of the peak power output of an engine. This is not to be confused with fuel efficiency, since high efficiency often requires a lean fuel-air ratio, and thus lower power density. A modern high-performance car engine makes in excess of 75 kW/l (1.65 hp/in³).

11.2.2 Everyday examples

Examples of everyday heat engines include the steam engine (for example, most of the world's power plants use steam turbines, a modern form of steam engine) and the internal combustion engine: the gasoline (petrol) engine and the diesel engine in an automobile or truck. A common toy that is also a heat engine is a drinking bird, and the Stirling engine is a heat engine as well. All of these familiar heat engines are powered by the expansion of heated gases. The general surroundings are the heat sink, which provides relatively cool gases that, when heated, expand rapidly to drive the mechanical motion of the engine.

11.2.3 Examples of heat engines

It is important to note that although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson developed an externally heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles.

Earth's heat engine

Earth's atmosphere and hydrosphere (Earth's heat engine) are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds, and ocean circulation, distributing heat around the globe.[7]

The Hadley system provides an example of a heat engine. The Hadley circulation is identified with the rising of warm and moist air in the equatorial region and the descent of colder air in the subtropics, corresponding to a thermally driven direct circulation, with consequent net production of kinetic energy.[8]

Phase-change cycles

In these cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.

• Rankine cycle (classical steam engine)
• Regenerative cycle (steam engine, more efficient than the Rankine cycle)
• Organic Rankine cycle (coolant changing phase in temperature ranges of ice and hot liquid water)
• Vapor to liquid cycle (drinking bird, injector, Minto wheel)
• Liquid to solid cycle (frost heaving: water changing from ice to liquid and back again can lift rock up to 60 cm)
• Solid to gas cycle (dry ice cannon: dry ice sublimes to gas)

Gas-only cycles

In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):

• Carnot cycle (Carnot heat engine)
• Ericsson cycle (Caloric Ship John Ericsson)
• Stirling cycle (Stirling engine, thermoacoustic devices)
• Internal combustion engine (ICE):
  • Otto cycle (e.g. gasoline/petrol engine)
  • Diesel cycle (e.g. diesel engine)
  • Atkinson cycle (Atkinson engine)
  • Brayton cycle or Joule cycle, originally the Ericsson cycle (gas turbine)
  • Lenoir cycle (e.g. pulse jet engine)
  • Miller cycle (Miller engine)

Liquid-only cycles

In these cycles and engines the working fluid is always a liquid:

• Stirling cycle (Malone engine)
• Heat Regenerative Cyclone[9]

Electron cycles

• Johnson thermoelectric energy converter
• Thermoelectric (Peltier–Seebeck effect)
• Thermogalvanic cell
• Thermionic emission
• Thermotunnel cooling

Magnetic cycles

• Thermo-magnetic motor (Tesla)

Cycles used for refrigeration

Main article: refrigeration

A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible.

Refrigeration cycles include:

• Vapor-compression refrigeration
• Stirling cryocoolers
• Gas-absorption refrigerator
• Air cycle machine
• Vuilleumier refrigeration
• Magnetic refrigeration

Evaporative heat engines

The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.

Mesoscopic heat engines

Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and performing useful work at small scales. Potential applications include e.g. electric cooling devices. In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. There is an exact equality that relates the average of exponents of work performed by any heat engine to the heat transfer from the hotter heat bath.[10] This relation transforms Carnot's inequality into an exact equality.

11.2.4 Efficiency

The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input. From the laws of thermodynamics, after a completed cycle:

W = Qc − (−Qh)

where

W = −∮ P dV is the work extracted from the engine. (It is negative since work is done by the engine.)

Qh = Th ∆Sh is the heat energy taken from the high temperature system. (It is negative since heat is extracted from the source, hence (−Qh) is positive.)

Qc = Tc ∆Sc is the heat energy delivered to the cold temperature system. (It is positive since heat is added to the sink.)

In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and delivering the rest to the cold temperature heat sink.

In general, the efficiency of a given heat transfer process (whether it be a refrigerator, a heat pump or an engine) is defined informally by the ratio of "what you get out" to "what you put in". In the case of an engine, one desires to extract work and puts in a heat transfer:

η = −W/(−Qh) = (−Qh − Qc)/(−Qh) = 1 − Qc/(−Qh)

The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, this is because in reversible processes the change in entropy of the cold reservoir is the negative of that of the hot reservoir (i.e., ∆Sc = −∆Sh), keeping the overall change of entropy zero. Thus:

ηmax = 1 − (Tc ∆Sc)/(−Th ∆Sh) = 1 − Tc/Th

where Th is the absolute temperature of the hot source and Tc that of the cold sink, usually measured in kelvin. Note that ∆Sc is positive while ∆Sh is negative; in any reversible work-extracting process, entropy is overall not increased, but rather is moved from a hot (high-entropy) system to a cold (low-entropy) one, decreasing the entropy of the heat source and increasing that of the heat sink.

The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine were possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any process. Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.

Figure 2 and Figure 3 show variations on Carnot cycle efficiency. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.
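The sign conventions above can be exercised in a few lines of code. The numbers below (1000 J drawn from the hot reservoir, 600 J rejected to the sink, reservoirs at 600 K and 300 K) are assumed purely for illustration:

```python
# Signed heat per the convention above: Q_h is negative (heat extracted
# from the source), Q_c is positive (heat added to the sink).
Q_h = -1000.0   # J
Q_c = 600.0     # J

W = Q_c - (-Q_h)          # work; negative because it is done BY the engine
eta = -W / (-Q_h)         # thermal efficiency
assert abs(eta - (1 - Q_c / (-Q_h))) < 1e-12   # same formula, rearranged

# Carnot bound for the assumed reservoir temperatures (kelvin):
T_h, T_c = 600.0, 300.0
eta_max = 1 - T_c / T_h

print(eta, eta_max)       # 0.4 0.5
assert eta <= eta_max     # this hypothetical engine respects the Carnot limit
```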


Endoreversible heat engines

The most significant drawback to using the Carnot efficiency as a criterion of heat engine performance is the fact that, by its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient. This is because any transfer of heat between two bodies at differing temperatures is irreversible, and therefore the Carnot efficiency expression applies only in the infinitesimal limit. The major problem with that is that the object of most heat engines is to output some sort of power, and infinitesimal power is usually not what is being sought. A different measure of ideal heat engine efficiency is given by considerations of endoreversible thermodynamics, where the cycle is identical to the Carnot cycle except in that the two processes of heat transfer are not reversible (Callen 1985):

η = 1 − √(Tc/Th)   (Note: Units K or °R)

This model does a better job of predicting how well real-world heat engines can do (Callen 1985, see also endoreversible thermodynamics): the endoreversible efficiency much more closely models observed data.
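As a rough numerical illustration of the gap between the two limits, the sketch below uses temperatures of the order reported for a coal-fired station in Callen 1985; treat the 838 K / 298 K figures as an assumption for illustration:

```python
from math import sqrt

T_h, T_c = 838.0, 298.0   # hot and cold reservoir temperatures, kelvin

eta_carnot = 1 - T_c / T_h        # reversible (Carnot) limit, ~0.64
eta_endo = 1 - sqrt(T_c / T_h)    # endoreversible limit, ~0.40

print(f"Carnot: {eta_carnot:.2f}  endoreversible: {eta_endo:.2f}")
# Observed efficiencies of such plants (roughly 0.36) sit much closer to
# the endoreversible value than to the Carnot value.
```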

11.2.5 History

Main article: Timeline of heat engine technology See also: History of the internal combustion engine and History of thermodynamics Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today.

11.2.6 Heat engine enhancements

Engineers have studied the various heat engine cycles extensively in an effort to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have worked out at least two ways to possibly go around that limit, and one way to get better efficiency without bending any rules.

1. Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials used to build the engine) and environmental concerns regarding NOₓ production restrict the maximum temperature on workable heat engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOₓ output. Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, and then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes.

2. Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the so-called critical point, or supercritical steam. The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is supercritical CO2. SO2 and xenon have also been considered for such applications, although SO2 is toxic.

3. Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer, di-nitrogen tetroxide (N2O4). At low temperature, the N2O4 is compressed and then heated. The increasing temperature causes each N2O4 to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which makes it recombine into N2O4. This is then fed back to the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized.[12]

11.2.7 Heat engine processes

Each process is one of the following:

• isothermal (at constant temperature, maintained with heat added or removed from a heat source or sink)
• isobaric (at constant pressure)
• isometric/isochoric (at constant volume), also referred to as iso-volumetric
• adiabatic (no heat is added or removed from the system during an adiabatic process)
• isentropic (reversible adiabatic process: no heat is added or removed during an isentropic process)

11.2.8 See also

• Heat pump
• Reciprocating engine, for a general description of the mechanics of piston engines
• Thermosynthesis
• Timeline of heat engine technology

11.2.9 References

[1] Fundamentals of Classical Thermodynamics, 3rd ed. p. 159, (1985) by G. J. Van Wylen and R. E. Sonntag: “A heat engine may be defined as a device that operates in a thermodynamic cycle and does a certain amount of net positive work as a result of heat transfer from a high-temperature body and to a low-temperature body. Often the term heat engine is used in a broader sense to include all devices that produce work, either through heat transfer or combustion, even though the device does not operate in a thermodynamic cycle. The internal-combustion engine and the gas turbine are examples of such devices, and calling these heat engines is an acceptable use of the term.” [2] Mechanical efficiency of heat engines, p. 1 (2007) by James R. Senf: “Heat engines are made to provide mechanical energy from thermal energy.” [3] Thermal physics: entropy and free energies, by Joon Chang Lee (2002), Appendix A, p. 183: “A heat engine absorbs energy from a heat source and then converts it into work for us.... When the engine absorbs heat energy, the absorbed heat energy comes with entropy.” (heat energy ∆Q = T ∆S ), “When the engine performs work, on the other hand, no entropy leaves the engine. This is problematic. We would like the engine to repeat the process again and again to provide us with a steady work source. ... to do so, the working substance inside the engine must return to its initial thermodynamic condition after a cycle, which requires to remove the remaining entropy. The engine can do this only in one way. It must let part of the absorbed heat energy leave without converting it into work. Therefore the engine cannot convert all of the input energy into work!"


[4] M. Emam, Experimental Investigations on a Standing-Wave Thermoacoustic Engine, M.Sc. Thesis, Cairo University, Egypt (2013).

[5] Where the Energy Goes: Gasoline Vehicles, US Dept of Energy

[6] “Efficiency by the Numbers” by Lee S. Langston

[7] Lindsey, Rebecca (2009). “Climate and Earth’s Energy Budget”. NASA Earth Observatory.

[8] Junling Huang and Michael B. McElroy (2014). “Contributions of the Hadley and Ferrel Circulations to the Energetics of the Atmosphere over the Past 32 Years”. Journal of Climate 27 (7): 2656–2666. Bibcode:2014JCli...27.2656H. doi:10.1175/jcli-d-13-00538.1.

[9] “Cyclone Power Technologies Website”. cyclonepower.com. Retrieved 2012-03-22.

[10] N. A. Sinitsyn (2011). “Fluctuation Relation for Heat Engines”. J. Phys. A: Math. Theor. 44: 405001. arXiv:1111.7014. Bibcode:2011JPhA...44N5001S. doi:10.1088/1751-8113/44/40/405001. [11] F. L. Curzon, B. Ahlborn (1975). “Efficiency of a Carnot Engine at Maximum Power Output”. Am. J. Phys., Vol. 43, pp. 24. [12] “Nuclear Reactors Concepts and Thermodynamic Cycles” (PDF). Retrieved 2012-03-22.

• Kroemer, Herbert; Kittel, Charles (1980). Thermal Physics (2nd ed.). W. H. Freeman Company. ISBN 0-7167-1088-9.
• Callen, Herbert B. (1985). Thermodynamics and an Introduction to Thermostatistics (2nd ed.). John Wiley & Sons, Inc. ISBN 0-471-86256-8.

11.3 Thermodynamic cycle

A thermodynamic cycle consists of a linked sequence of thermodynamic processes that involve transfer of heat and work into and out of the system, while varying pressure, temperature, and other state variables within the system, and that eventually returns the system to its initial state.[1] In the process of passing through a cycle, the working fluid (system) may convert heat from a warm source into useful work, and dispose of the remaining heat to a cold sink, thereby acting as a heat engine. Conversely, the cycle may be reversed and use work to move heat from a cold source and transfer it to a warm sink, thereby acting as a heat pump. During a closed cycle, the system returns to its original thermodynamic state of temperature and pressure.

Process quantities (or path quantities), such as heat and work, are process dependent. For a cycle for which the system returns to its initial state the first law of thermodynamics applies:

∆E = Eout − Ein = 0

The above states that there is no change of the energy of the system over the cycle. Ein might be the work and heat input during the cycle and Eout would be the work and heat output during the cycle. The first law of thermodynamics also dictates that the net heat input is equal to the net work output over a cycle (we account for heat, Qin, as positive and Qout as negative). The repeating nature of the process path allows for continuous operation, making the cycle an important concept in thermodynamics. Thermodynamic cycles are often represented mathematically as quasistatic processes in the modeling of the workings of an actual device.
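The bookkeeping in this paragraph can be sketched directly: give each process in the cycle a heat intake Q and a work output W, apply the first law per step, and the per-cycle sums must balance. The four (Q, W) pairs below are assumed values for illustration:

```python
# Each tuple is (heat added TO the system, work done BY the system) for one
# process of the cycle; first law per process: dE = Q - W.
steps = [(500.0, 200.0), (0.0, 150.0), (-300.0, -100.0), (0.0, -50.0)]

dE_total = sum(q - w for q, w in steps)
Q_net = sum(q for q, _ in steps)       # net heat input over the cycle
W_net = sum(w for _, w in steps)       # net work output over the cycle

assert dE_total == 0.0    # energy is a state function: no change over a cycle
assert Q_net == W_net     # hence net heat input equals net work output
print(Q_net, W_net)       # 200.0 200.0
```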

11.3.1 Heat and work

Two primary classes of thermodynamic cycles are power cycles and heat pump cycles. Power cycles are cycles which convert some heat input into a mechanical work output, while heat pump cycles transfer heat from low to high temperatures by using mechanical work as the input. Cycles composed entirely of quasistatic processes can operate as power or heat pump cycles by controlling the process direction. On a pressure-volume (PV) diagram or temperature-entropy diagram, the clockwise and counterclockwise directions indicate power and heat pump cycles, respectively.

Relationship to work

The net work equals the area inside the loop because it is (a) the Riemann sum of work done on the substance due to expansion, minus (b) the work done to re-compress.

Because the net variation in state properties during a thermodynamic cycle is zero, it forms a closed loop on a PV diagram. A PV diagram's Y axis shows pressure (P) and X axis shows volume (V). The area enclosed by the loop is the work (W) done by the process:

(1) W = ∮ P dV

This work is equal to the balance of heat (Q) transferred into the system:

(2) W = Q = Qin − Qout

Equation (2) makes a cyclic process similar to an isothermal process: even though the internal energy changes during the course of the cyclic process, when the cyclic process finishes the system's energy is the same as the energy it had when the process began. If the cyclic process moves clockwise around the loop, then W will be positive, and it represents a heat engine. If it moves counterclockwise, then W will be negative, and it represents a heat pump.

Each point in the cycle

Otto cycle:

1→2: Isentropic expansion: constant entropy (S), decrease in pressure (P), increase in volume (V), decrease in temperature (T)
2→3: Isochoric cooling: constant volume (V), decrease in pressure (P), decrease in entropy (S), decrease in temperature (T)
3→4: Isentropic compression: constant entropy (S), increase in pressure (P), decrease in volume (V), increase in temperature (T)
4→1: Isochoric heating: constant volume (V), increase in pressure (P), increase in entropy (S), increase in temperature (T)

A list of thermodynamic processes:

Adiabatic: no energy transfer as heat (Q) during that part of the cycle (δQ=0). This does not exclude energy transfer as work.
Isothermal: the process is at a constant temperature during that part of the cycle (T=constant, δT=0). This does not exclude energy transfer as heat or work.
Isobaric: pressure in that part of the cycle will remain constant (P=constant, δP=0). This does not exclude energy transfer as heat or work.
Isochoric: the process is constant volume (V=constant, δV=0). This does not exclude energy transfer as heat or work.
Isentropic: the process is one of constant entropy (S=constant, δS=0). This excludes the transfer of heat but not work.

Power cycles

Main article: Heat engine

Heat engine diagram. The clockwise thermodynamic cycle indicated by the arrows shows that the cycle represents a heat engine. The cycle consists of four states (the points shown by crosses) and four thermodynamic processes (lines).

Thermodynamic power cycles are the basis for the operation of heat engines, which supply most of the world's electric power and run the vast majority of motor vehicles. Power cycles can be organized into two categories: real cycles and ideal cycles. Cycles encountered in real world devices (real cycles) are difficult to analyze because of the presence of complicating effects (friction), and the absence of sufficient time for the establishment of equilibrium conditions. For the purpose of analysis and design, idealized models (ideal cycles) are created; these ideal models allow engineers to study the effects of major parameters that dominate the cycle without having to spend significant time working out intricate details present in the real cycle model.

Power cycles can also be divided according to the type of heat engine they seek to model. The most common cycles used to model internal combustion engines are the Otto cycle, which models gasoline engines, and the Diesel cycle, which models diesel engines. Cycles that model external combustion engines include the Brayton cycle, which models gas turbines, the Rankine cycle, which models steam turbines, the Stirling cycle, which models hot air engines, and the Ericsson cycle, which also models hot air engines.

For example, the pressure-volume mechanical work output from the heat engine cycle (net work out), consisting of four thermodynamic processes, is:

(3) Wnet = W1→2 + W2→3 + W3→4 + W4→1

W1→2 = ∫ from V1 to V2 of P dV (work done on the system, negative)
W2→3 = ∫ from V2 to V3 of P dV (zero if V3 equals V2)
W3→4 = ∫ from V3 to V4 of P dV (work done by the system, positive)
W4→1 = ∫ from V4 to V1 of P dV (zero if V1 equals V4)

If no volume change happens in processes 4→1 and 2→3, equation (3) simplifies to:

(4) Wnet = W1→2 + W3→4

Heat pump cycles

Main article: Heat pump and refrigeration cycle

Thermodynamic heat pump cycles are the models for household heat pumps and refrigerators. There is no difference between the two except that the purpose of the refrigerator is to cool a very small space while the household heat pump is intended to warm a house. Both work by moving heat from a cold space to a warm space. The most common refrigeration cycle is the vapor compression cycle, which models systems using refrigerants that change phase. The absorption refrigeration cycle is an alternative that absorbs the refrigerant in a liquid solution rather than evaporating it. Gas refrigeration cycles include the reversed Brayton cycle and the Hampson-Linde cycle. Multiple compression and expansion cycles allow gas refrigeration systems to liquify gases.

11.3.2 Modelling real systems

Thermodynamic cycles may be used to model real devices and systems, typically by making a series of assumptions.[2] Simplifying assumptions are often necessary to reduce the problem to a more manageable form. For example, as shown in the figure, devices such as a gas turbine or jet engine can be modeled as a Brayton cycle. The actual device is made up of a series of stages, each of which is itself modeled as an idealized thermodynamic process. Although each stage which acts on the working fluid is a complex real device, they may be modelled as idealized processes which approximate their real behavior. If energy is added by means other than combustion, then a further assumption is that the exhaust gases would be passed from the exhaust to a heat exchanger that would sink the waste heat to the environment and the working gas would be reused at the inlet stage.

The difference between an idealized cycle and actual performance may be significant.[2] For example, images comparing the work output predicted by an ideal Stirling cycle with the actual performance of a Stirling engine illustrate this. As the net work output for a cycle is represented by the interior of the cycle, there is a significant difference between the predicted work output of the ideal cycle and the actual work output shown by a real engine. It may also be observed that the real individual processes diverge from their idealized counterparts; e.g., isochoric expansion (process 1-2) occurs with some actual volume change.

11.3.3 Well-known thermodynamic cycles

In practice, simple idealized thermodynamic cycles are usually made out of four thermodynamic processes. Any thermodynamic processes may be used. However, when idealized cycles are modeled, often processes where one state variable is kept constant are used, such as an isothermal process (constant temperature), isobaric process (constant pressure), isochoric process (constant volume), isentropic process (constant entropy), or an isenthalpic process (constant enthalpy). Often adiabatic processes are also used, where no heat is exchanged.

Some example thermodynamic cycles and their constituent processes are as follows:

Ideal cycle

An illustration of an ideal cycle heat engine (arrows clockwise).

An ideal cycle is constructed out of:

1. TOP and BOTTOM of the loop: a pair of parallel isobaric processes
2. LEFT and RIGHT of the loop: a pair of parallel isochoric processes

Internal energy of a perfect gas undergoing different portions of a cycle:

Isothermal: ∆U = RT ln(V2/V1) − RT ln(V2/V1) = 0 (Note: ∆U of an isothermal process is equal to 0)
Isochoric: ∆U = Cv ∆T − 0 = Cv ∆T
Isobaric: ∆U = Cp ∆T − R∆T (or P ∆V) = Cv ∆T

Carnot cycle

Main article: Carnot cycle

The Carnot cycle is a cycle composed of the totally reversible processes of isentropic compression and expansion and isothermal heat addition and rejection. The thermal efficiency of a Carnot cycle depends only on the absolute temperatures of the two reservoirs in which heat transfer takes place, and for a power cycle is:

η = 1 − TL/TH

where TL is the lowest cycle temperature and TH the highest. For Carnot power cycles the coefficient of performance for a heat pump is:

COP = 1 + TL/(TH − TL)

and for a refrigerator the coefficient of performance is:

COP = TL/(TH − TL)

The second law of thermodynamics limits the efficiency and COP for all cyclic devices to levels at or below the Carnot efficiency. The Stirling cycle and Ericsson cycle are two other reversible cycles that use regeneration to obtain isothermal heat transfer.

Stirling cycle

Main article: Stirling cycle

A Stirling cycle is like an Otto cycle, except that the adiabats are replaced by isotherms. It is also the same as an Ericsson cycle with the isobaric processes substituted for constant volume processes. A Stirling cycle is constructed out of:

1. TOP and BOTTOM of the loop: a pair of quasi-parallel isothermal processes
2. LEFT and RIGHT sides of the loop: a pair of parallel isochoric processes

Heat flows into the loop through the top isotherm and the left isochore, and some of this heat flows back out through the bottom isotherm and the right isochore, but most of the heat flow is through the pair of isotherms. This makes sense since all the work done by the cycle is done by the pair of isothermal processes, which are described by Q=W. This suggests that all the net heat comes in through the top isotherm. In fact, all of the heat which comes in through the left isochore comes out through the right isochore: since the top isotherm is all at the same warmer temperature TH and the bottom isotherm is all at the same cooler temperature TC, and since change in energy for an isochore is proportional to change in temperature, all of the heat coming in through the left isochore is cancelled out exactly by the heat going out the right isochore.

11.3.4 State functions and entropy

If Z is a state function then the balance of Z remains unchanged during a cyclic process:

∮ dZ = 0

Entropy is a state function and is defined as

S = Q/T

so that

∆S = ∆Q/T

Then it is clear that for any cyclic process,

∮ dS = ∮ dQ/T = 0

meaning that the net entropy change over a cycle is 0.
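The statement that ∮dS = 0 can be verified numerically for a concrete cycle. The sketch below (one mole of monatomic ideal gas; the temperatures and volumes are assumed illustrative values) builds an ideal-gas Carnot cycle and checks that the two isothermal legs contribute equal and opposite dQ/T, while the adiabats contribute none:

```python
from math import log

# Check ∮ dQ/T = 0 for an ideal-gas Carnot cycle (one mole; illustrative,
# assumed values). Heat is exchanged only on the two isotherms, and the
# adiabat relation T * V**(gamma - 1) = const forces equal volume ratios
# on the two isotherms.
R, gamma = 8.314, 5.0 / 3.0        # gas constant; monatomic ideal gas
T_h, T_c = 500.0, 300.0            # isotherm temperatures, K
V1, V2 = 1e-3, 2e-3                # isothermal expansion at T_h: V1 -> V2

# Volumes after the adiabatic legs, from T * V**(gamma - 1) = const:
V3 = V2 * (T_h / T_c) ** (1 / (gamma - 1))
V4 = V1 * (T_h / T_c) ** (1 / (gamma - 1))

Q_h = R * T_h * log(V2 / V1)       # heat absorbed on the hot isotherm
Q_c = R * T_c * log(V4 / V3)       # heat on the cold isotherm (negative)

cycle_entropy = Q_h / T_h + Q_c / T_c   # adiabats contribute no dQ/T
print(cycle_entropy)
assert abs(cycle_entropy) < 1e-12       # net entropy change over the cycle is 0
```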

11.3.5 See also

• Entropy
• Economizer

11.3.6 References

[1] Cengel, Yunus A.; Boles, Michael A. (2002). Thermodynamics: an engineering approach. Boston: McGraw-Hill. p. 14. ISBN 0-07-238332-1. [2] Cengel, Yunus A.; Boles, Michael A. (2002). Thermodynamics: an engineering approach. Boston: McGraw-Hill. pp. 452. ISBN 0-07-238332-1.

11.3.7 Further reading

• Halliday, Resnick & Walker. Fundamentals of Physics, 5th edition. John Wiley & Sons, 1997. Chapter 21, Entropy and the Second Law of Thermodynamics. • Çengel, Yunus A., and Michael A. Boles. Thermodynamics: An Engineering Approach, 7th ed. New York: McGraw-Hill, 2011. Print. • Hill and Peterson. “Mechanics and Thermodynamics of Propulsion”, 2nd ed. Prentice Hall, 1991. 760 pp.

11.3.8 External links

Chapter 12

Text and image sources, contributors, and licenses 12.1 Text • Thermodynamics Source: https://en.wikipedia.org/wiki/Thermodynamics?oldid=715999458 Contributors: Bryan Derksen, Stokerm, Andre Engels, Danny, Miguel~enwiki, Roadrunner, Jdpipe, Heron, Arj, Olivier, Ram-Man, Michael Hardy, Tim Starling, Kku, Menchi, Jedimike, TakuyaMurata, Dgrant, Looxix~enwiki, Ahoerstemeier, CatherineMunro, Glenn, Victor Gijsbers, Jeff Relf, Mxn, Smack, Ehn, Tantalate, Reddi, Lfh, Peregrine981, Eadric, Miterdale, Phys, Fvw, Raul654, Seherr, Mjmcb1, Lumos3, RadicalBender, Rogper~enwiki, Robbot, R3m0t, Babbage, Moink, Hadal, Fuelbottle, Quadalpha, Seth Ilys, Diberri, Ancheta Wis, Giftlite, Mshonle~enwiki, N12345n, Lee J Haywood, Monedula, Wwoods, Dratman, Curps, Michael Devore, Bensaccount, Abqwildcat, Macrakis, Foobar, Physicist, Louis Labrèche, Daen, Antandrus, BozMo, OverlordQ, Karol Langner, APH, H Padleckas, Icairns, Monn0016, Sam Hocevar, MulderX, Agro r, Edsanville, Klemen Kocjancic, Mike Rosoft, Poccil, CALR, EugeneZelenko, Masudr, Llh, Vsmith, Jpk, Pavel Vozenilek, Dmr2, Bender235, Eric Forste, Pmetzger, El C, Hayabusa future, Femto, CDN99, Bobo192, Jung dalglish, SpeedyGonsales, Sasquatch, MPerel, Helix84, Haham hanuka, Pearle, Jumbuck, Ixfalia, Alansohn, Gary, Dbeardsl, Atlant, PAR, Cdc, Malo, Cortonin, Wtmitchell, NAshbery, Docboat, Jheald, Gene Nygaard, Falcorian, Zntrip, Alyblaith, Miaow Miaow, Uncle G, Plek, Carcharoth, Kzollman, Jwulsin, Sympleko, Pkeck, Tylerni7, Jwanders, Keta, Mido, Cbdorsett, Dzordzm, Frankie1969, Prashanthns, Mandarax, Graham87, Jclemens, Melesse, Rjwilmsi, DrTorstenHenning, SMC, Ligulem, Dar-Ape, JohnnoShadbolt, Sango123, Dyolf Knip, Titoxd, FlaBot, MacRusgail, RexNL, Jrtayloriv, Lynxara, Thecurran, Srleffler, Chobot, DVdm, Bgwhite, Roboto de Ajvol, The Rambling Man, Siddhant, RobotE, Pip2andahalf, Sillybilly, Anonymous editor, Anubis1975, JabberWok, Casey56, Wavesmikey, Stephenb, Okedem, The1physicist, CambridgeBayWeather, Rsrikanth05, Wiki alf, 
Hagiographer, UDScott, Nick, Dhollm, Abb3w, DeadEyeArrow, Ms2ger, Spinkysam, Enormousdude, Lt-wiki-bot, Arthur Rubin, Pb30, KGasso, MaNeMeBasat, Banus, RG2, Bo Jacoby, DVD R W, That Guy, From That Show!, Quadpus, Luk, ChemGardener, Vanka5, A13ean, SmackBot, Aim Here, Bobet, C J Cowie, Sounny, Bomac, Jagged 85, Onebravemonkey, Sundaryourfriend, Gilliam, Hmains, Skizzik, ThorinMuglindir, Saros136, Bluebot, Bduke, Silly rabbit, SchfiftyThree, Complexica, DHN-bot~enwiki, Antonrojo, Stedder, Sholto Maud, EvelinaB, HGS, Nakon, Lagrangian, Dreadstar, Richard001, Hammer1980, BryanG, Jklin, DMacks, Sadi Carnot, Kukini, SashatoBot, Ocee, ML5, CatastrophicToad~enwiki, JoseREMY, Nonsuch, Pflatau, Ben Moore, CyrilB, Frokor, Tasc, Beetstra, Waggers, Ζεύς, Funnybunny, Negrulio, Peyre, Ejw50, Lottamiata, Shoeofdeath, Mattmaccourt, Ivy mike, Moocowisi, Tawkerbot2, Dlohcierekim, Daniel5127, Deathcrap, Spudcrazy, Meisam.fa, CRGreathouse, Dycedarg, Scohoust, Albert.white, TVC 15, Ruslik0, Dgw, McVities, MarsRover, Freedumb, Casper2k3, Grj23, Cydebot, Gtxfrance, Rifleman 82, Bazzargh, Miketwardos, Shirulashem, Tpot2688, Omicronpersei8, Freak in the bunnysuit, Thijs!bot, MuTau, Barticus88, Bill Nye the wheelin' guy, Coelacan, Knakts, Kablammo, Headbomb, Pjvpjv, Gerry Ashton, James086, D.H, Stannered, Spud Gun, Austin Maxwell, AntiVandalBot, Gioto, Luna Santin, Jnyanydts, FrankLambert, Dylan Lake, JAnDbot, MER-C, Matthew Fennell, Acroterion, Lidnariq, Bongwarrior, VoABot II, JNW, Indon, Loonymonkey, User A1, Pax:Vobiscum, Oneileri, A666666, Jtir, BetBot~enwiki, Mermaid from the Baltic Sea, NAHID, Rettetast, Ravichandar84, R'n'B, LittleOldMe old, Mausy5043, Ludatha, Rhinestone K, Uncle Dick, Maurice Carbonaro, Yonidebot, Brien Clark, Ian.thomson, Dispenser, Katalaveno, MikeEagling, Notreallydavid, AntiSpamBot, Wariner, NewEnglandYankee, Nwbeeson, Ontarioboy, Rumpelstiltskin223, WilfriedC, KylieTastic, Bob, Joshmt, Lyctc, Vagr7, Biff Laserfire, CA387, Idioma-bot, Funandtrvl, VolkovBot, 
Macedonian, Orthologist, Philip Trueman, TXiKiBoT, Rei-bot, Anonymous Dissident, Sankalpdravid, Baatarchuluun~enwiki, Qxz, Anna Lincoln, CaptinJohn, Sillygoosemo, JhsBot, Leafyplant, Jackfork, Psyche825, Nny12345, Zion biñas, Appieters, Whbstare, Enigmaman, Sploonie, Synthebot, Falcon8765, Enviroboy, Phmoreno, A Raider Like Indiana, Furious.baz, SvNH, Jianni, EmxBot, Kbrose, Arjun024, SieBot, Damorbel, Paradoctor, Jason Patton, LeadSongDog, JerrySteal, Hoax user, Ddsmartie, Bentogoa, Happysailor, Flyer22 Reborn, Dhatfield, BrianGregory86, Oxymoron83, Antonio Lopez, CultureShock582, OKBot, Correogsk, Mygerardromance, Hamiltondaniel, JL-Bot, Tomasz Prochownik, Loren.wilton, ClueBot, Namasi, The Thing That Should Not Be, DesertAngel, Taroaldo, Therealmilton, Pak umrfrq, Kdruhl, LizardJr8, Whoever101, ChandlerMapBot, Notburnt, GrapeSmuckers, Aua, Djr32, Jusdafax, LaosLos, Chrisban0314, Pmronchi, Eeekster, Lartoven, Brews ohare, NuclearWarfare, Jotterbot, PhySusie, Scog, Sidsawsome, SoxBot, Razorflame, DEMOLISHOR, CheddarMan, Aitias, Dank, MagDude101, Galor612, Cableman1112, SoxBot III, RexxS, Faulcon DeLacy, Spitfire, Shres58tha, Avoided, Snapperman2, Thatguyflint, Mls1492, Thebestofall007, Addbot, Power.corrupts, DOI bot, Morri028, DougsTech, Patrosnoopy, Glane23, Bob K31416, Numbo3-bot, Landofthedead2, Lightbot, OlEnglish, Gatewayofintrigue, Ben Ben, Luckas-bot, Yobot, THEN WHO WAS PHONE?, Bos7, QueenCake, IW.HG, Magog the Ogre, AnomieBOT, Paranoidhuman, IncidentalPoint, Daniele Pugliesi, Jim1138, Flewis, Materialscientist, Celtis123, Citation bot, Fredde 99, LilHelpa, Xqbot, Ad-


dihockey10, Capricorn42, Fireballxyz, - ), Almabot, GrouchoBot, Omnipaedista, Jezhotwells, Waleswatcher, Logger9, Twested, Chjoaygame, FrescoBot, VS6507, Wallyau, Petr10, Galorr, WikiCatalogEdit701, Sae1962, Denello, Neutiquam, HamburgerRadio, Citation bot 1, Pinethicket, HRoestBot, 10metreh, Jonesey95, Calmer Waters, Thermo771, RedBot, MastiBot, Serols, TobeBot, Yunshui, CathySc, Thermodynoman, Thomas85127, Myleneo, Schmei, Brian the Editor, Unbitwise, Sundareshan, DARTH SIDIOUS 2, TjBot, Beyond My Ken, EmausBot, Orphan Wiki, Domesticenginerd, WikitanvirBot, Obamafan70, AriasFco, Helptry, Racerx11, GoingBatty, Nag 08, Your Lord and Master, Weleepoxypoo, Wikipelli, Dcirovic, K6ka, John of Lancaster, Hhhippo, Checkingfax, Traxs7, Shannon1, Azuris, H3llBot, Libb Thims, Wayne Slam, Tolly4bolly, Vanished user fois8fhow3iqf9hsrlgkjw4tus, EricWesBrown, Mayur, Donner60, Jbergste, DennisIsMe, Haiti333, Hazard-Bot, ChuispastonBot, Levis ken, Matewis1, LaurentRDC, 28bot, Sonicyouth86, Anshul173, ClueBot NG, Coverman6, Piast93, Chester Markel, Andreas.Persson, Chronic21, Jj1236, Duciswrong1234, Suresh 5, Widr, The Troll lolololololololol, NuclearEnergy, Helpful Pixie Bot, Calabe1992, DBigXray, Nomi12892, Necatikaval, BG19bot, Xonein, Krenair, BeRo999, Fedor Babkin, PTJoshua, Balajits93, Defladamouse, MusikAnimal, Metricopolus, Ushakaron, Mariano Blasi, CitationCleanerBot, Hollycliff, Zedshort, Asaydjari, Blodslav, Nascar90210, DarafshBot, Adwaele, Kaslu.S, Dexbot, Duncanpark, Joeljoeljoel12345, Czforest, Josophie, Miyangoo, Beans098, Reatlas, Rejnej, Nerlost, Epicgenius, Georgegeorge127, Deadmau8****, I am One of Many, Harlem Baker Hughes, Dakkagon, DavidLeighEllis, Vinodhchennu, Ugog Nizdast, Prokaryotes, Eff John Wayne, Ginsuloft, Bubba58, Nanapanners, Fortuna Imperatrix Mundi, Hknaik1307, Monkbot, Horseless Headman, Codebreaker1999, BTHB2010, Bunlip, Zirus101, Xmlhttp.readystate, Crystallizedcarbon, Qpdatabase, Jayashree1203, Youlikeman, JellyPatotie, Loveusujeet, Isambard 
Kingdom, CV9933, Nashrudin13l, Supdiop, The Collapsation of The Sensation, KasparBot, CabbagePotato, Amangautam1995, Лагічна рэвалюцыйны, Ravi.dhami.234, Bishwajeet Panda, HenryGroupman, CaptainSirsir, Dctfgijkm, Paragnar, Spinrade, Downingk9711, K Sikdar and Anonymous: 898 • Statistical mechanics Source: https://en.wikipedia.org/wiki/Statistical_mechanics?oldid=713626783 Contributors: The Cunctator, Derek Ross, Bryan Derksen, The Anome, Ap, Miguel~enwiki, Peterlin~enwiki, Edward, Patrick, Michael Hardy, Tim Starling, Den fjättrade ankan~enwiki, Bogdangiusca, Mxn, Charles Matthews, Phys, Nnh, Eman, Fuelbottle, Isopropyl, Cordell, Ancheta Wis, Giftlite, Andries, Mikez, Monedula, Alison, Tweenk, John Palkovic, Karol Langner, APH, Karl-Henner, Edsanville, Michael L. Kaufman, Chris Howard, Brianjd, Bender235, Elwikipedista~enwiki, Linuxlad, Jumbuck, Ryanmcdaniel, BryanD, PAR, Jheald, Woohookitty, Linas, StradivariusTV, Kzollman, Pol098, Mpatel, SDC, DaveApter, Nanite, Rjwilmsi, HappyCamper, FlaBot, Margosbot~enwiki, Gurch, Fephisto, GangofOne, Sanpaz, YurikBot, Wavelength, The.orpheus, DiceDiceBaby, JabberWok, Brec, Mary blackwell, Dhollm, E2mb0t~enwiki, Aleksas, Teply, That Guy, From That Show!, SmackBot, Pavlovič, Charele, Jyoshimi, Weiguxp, David Woolley, Edgar181, Drttm, Steve Omohundro, Skizzik, DMTagatac, ThorinMuglindir, Kmarinas86, Bluebot, MK8, Complexica, Sbharris, Wiki me, Phudga, Radagast83, RandomP, G716, Sadi Carnot, Yevgeny Kats, Lambiam, Chrisch, Frokor, Mets501, Politepunk, Iridescent, IvanLanin, Daniel5127, Van helsing, Djus, Mct mht, Cydebot, Forthommel, Boardhead, Dancter, Joyradost, Christian75, Abtract, Thijs!bot, Headbomb, Spud Gun, Samkung, Alphachimpbot, Perelaar, Chandraveer, JAnDbot, Yill577, Magioladitis, WolfmanSF, VoABot II, Dirac66, Jorgenumata, Peabeejay, SimpsonDG, Lantonov, Sheliak, Gerrit C. 
Groenenboom, VolkovBot, Scorwin, LokiClock, The Original Wildbear, Agricola44, Moondarkx, Locke9k, PhysPhD, Anoko moonlight, Kbrose, SieBot, Damorbel, LeadSongDog, Melcombe, StewartMH, Apuldram, Plastikspork, Razimantv, Mild Bill Hiccup, Davennmarr, Vql, Lyonspen, Djr32, CohesionBot, Brews ohare, Mlys~enwiki, Doprendek, SchreiberBike, Thingg, Edkarpov, Qwfp, JKeck, Koumz, TravisAF, Truthnlove, Addbot, Xp54321, DOI bot, Wickey-nl, Looie496, Netzwerkerin, , SPat, Gail, Loupeter, Yobot, Ht686rg90, TaBOT-zerem, ^musaz, Xqbot, P99am, ChristopherKingChemist, Charvest, Hlfhjwlrdglsp, Baz.77.243.99.32, Anterior1, Jonesey95, RjwilmsiBot, Pullister, EmausBot, Dcirovic, Michael assis, JSquish, ZéroBot, Wikfr, AManWithNoPlan, Kyucasio, Hpubliclibrary, Keulian, Rashhypothesis, IBensone, RockMagnetist, EdoBot, Amviotd, ClueBot NG, CocuBot, Landregn, Frietjes, Theopolisme, Helpful Pixie Bot, Mulhollant, Robwf, PhnomPencil, Op47, Acmedogs, F=q(E+v^B), JZCL, Roshan220195, Egm4313.s12, Illia Connell, Dexbot, Mogism, Mark viking, Alefbenedetti, W. P. 
Uzer, KeithFratus, Michael Lee Baker, PhilippeTilly, ԱշոտՏՆՂ, Udus97, Scientific Adviser, Izkala, VexorAbVikipædia, Dymaio, KasparBot, Spinrade, JosiahWilard, Gray76007600 and Anonymous: 161 • Chemical thermodynamics Source: https://en.wikipedia.org/wiki/Chemical_thermodynamics?oldid=704728105 Contributors: Jdpipe, Selket, Jeffq, Robbot, Giftlite, H Padleckas, Icairns, Discospinster, Vsmith, Nk, Alansohn, PAR, Count Iblis, LukeSurl, StradivariusTV, Jeff3000, Ketiltrout, Srleffler, Sanguinity, Dhollm, Arthur Rubin, Elfer~enwiki, Itub, SmackBot, Fuzzform, MalafayaBot, Hallenrm, SteveLower, Sadi Carnot, JzG, Beetstra, Optakeover, Myasuda, AndrewHowse, Astrochemist, ErrantX, Thijs!bot, Barticus88, Headbomb, Marek69, D.H, User A1, Thermbal, AtholM, Avitohol, Yuorme, Thisisborin9, Philip Trueman, The Original Wildbear, Seb az86556, Damorbel, Caltas, ClueBot, The Thing That Should Not Be, Ectomaniac, DragonBot, Excirial, Tnxman307, SchreiberBike, Avoided, Ronhjones, LaaknorBot, EconoPhysicist, Bwrs, Legobot, Luckas-bot, Yobot, Gdewilde, Daniele Pugliesi, Unara, Materialscientist, The High Fin Sperm Whale, Citation bot, J G Campbell, GrouchoBot, Bellerophon, ‫قلی زادگان‬, Stratocracy, FrescoBot, Wikipe-tan, StaticVision, Galorr, Citation bot 1, Russot1, IncognitoErgoSum, RenamedUser01302013, Wikipelli, ClueBot NG, Gilderien, NuclearEnergy, Helpful Pixie Bot, BG19bot, Mn-imhotep, JYBot, Notebooktheif, The Herald, Citrusbowler, Billyjeanisalive1995, Monkbot, Shreyas murthy and Anonymous: 76 • Equilibrium thermodynamics Source: https://en.wikipedia.org/wiki/Equilibrium_thermodynamics?oldid=693963750 Contributors: Quadalpha, Karol Langner, Pjacobi, Vsmith, ChrisChiasson, Wavesmikey, Dhollm, Sadi Carnot, Alphachimpbot, OKBot, Daniele Pugliesi, ‫قلی زادگان‬, Chjoaygame, EmausBot, ZxxZxxZ, Czforest and Anonymous: 3 • Non-equilibrium thermodynamics Source: https://en.wikipedia.org/wiki/Non-equilibrium_thermodynamics?oldid=716464676 Contributors: The Anome, Toby Bartels, 
Miguel~enwiki, SimonP, Michael Hardy, Kku, William M. Connolley, Phys, Aetheling, Tea2min, Waltpohl, Karol Langner, Mike Rosoft, Chris Howard, Bender235, Mdd, PAR, Oleg Alexandrov, Linas, Mandarax, Rjwilmsi, Michielsen, Mathbot, Physchim62, ChrisChiasson, Gwernol, Wavesmikey, Jugander, Ozarfreo, Dhollm, SmackBot, WebDrake, Bluebot, Complexica, Jbergquist, Sadi Carnot, JarahE, NonDucor, Cydebot, X14n, Boardhead, Mirrormundo, Miketwardos, D4g0thur, HappyInGeneral, Headbomb, Juchoy, Mythealias, GuidoGer, R'n'B, AgarwalSumeet, Unauthorised Immunophysicist, Lseixas, TXiKiBoT, Xdeh, Zhenqinli, Kbrose, Burhan Salay, Mihaiam~enwiki, Eug373, XLinkBot, Nathan Johnson, Addbot, Favonian, Yobot, Tamtamar, AnomieBOT, Materialscientist, Citation bot, Yrogirg, ‫قلی زادگان‬, Nerdseeksblonde, Chjoaygame, Sinusoidal, Citation bot 1, Loudubewe, RedBot, DrProbability, Thermoworld, Tranh Nguyen, RjwilmsiBot, Massieu, ZéroBot, TyA, Ems2715, ThePowerofX, Gary Dee, Snotbot, X-men2011, Bernhlav, MerlIwBot, Helpful Pixie Bot, 7methylguanosine, Bibcode Bot, BG19bot, Mn-imhotep, Taylanmath, Pfd1986, Cyberbot II, Laberkiste, Adwaele, JYBot, Duncanpark, Lebon-anthierens, Mimigdal, Yardimsever, Campo246, Kogge, Annakremen, Ssmmachen, JosiahWilard, WandaLan and Anonymous: 53 • Zeroth law of thermodynamics Source: https://en.wikipedia.org/wiki/Zeroth_law_of_thermodynamics?oldid=707169471 Contributors: The Anome, Michael Hardy, Tim Starling, Ellywa, Victor Gijsbers, Reddi, Wik, Jeepien, Fibonacci, Sokane, Raul654, Bkell, Seth Ilys, Cutler, Alan


Liefting, Giftlite, Binadot, Dissident, Marcika, Jason Quinn, Robert Brockway, Karol Langner, Asbestos, Cinar, M1ss1ontomars2k4, DanielJanzon~enwiki, Pjacobi, Paul August, Bender235, Pt, Ntmatter, Duk, Nk, Llywelyn, Wrs1864, Pearle, Alansohn, PAR, Rgeldard, Kdau, BDD, Miaow Miaow, Tutmosis, Palica, Yurik, Jehochman, Fresheneesz, Chobot, YurikBot, Splintercellguy, NTBot~enwiki, Wavesmikey, SCZenz, Dhollm, E2mb0t~enwiki, Syrthiss, Kortoso, Bota47, TheMadBaron, Theda, Kwyjibear, NetRolller 3D, SmackBot, InverseHypercube, Neptunius, Knowhow, Müslimix, ThorinMuglindir, MalafayaBot, DHN-bot~enwiki, Tsca.bot, Sholto Maud, Chlewbot, Quadparty, Cybercobra, Richard001, Marosszék, Sadi Carnot, Lambiam, Wikipedialuva, Frokor, Dicklyon, Ginkgo100, K, Hyperquantization, Achoo5000, Equendil, Kareemjee, Astrochemist, Meno25, Ring0, Odie5533, Christian75, Hernlund, Mawfive, Headbomb, Pfranson, Widefox, JAnDbot, JamesBWatson, WLU, Anaxial, Yonidebot, SubwayEater, DorganBot, Gpetrov, Funandtrvl, ACSE, Amikake3, Lear’s Fool, Davwillev, Wenli, Anna512, Spinningspark, Derek Iv, Zebas, Kbrose, SieBot, Tresiden, Revent, Jojalozzo, Tombomp, OKBot, Svick, ClueBot, Wikijens, Djr32, Excirial, Alexbot, Estirabot, Sun Creator, La Pianista, MigFP, Nathan Johnson, Addbot, Tcncv, Metagraph, Chamal N, Lightbot, Zorrobot, Luckas-bot, Yobot, THEN WHO WAS PHONE?, AnomieBOT, Kingpin13, Materialscientist, Xqbot, Eddy 1000, GrouchoBot, Omnipaedista, Brandon5485, Markorajendra, Sheeson, Much noise, Chjoaygame, Dgyeah, Sławomir Biały, Nobleness of Mind, Fortesque666, Sundareshan, Korech, Devper94, EmausBot, WikitanvirBot, KurtLC, Wikipelli, ZéroBot, Makecat, Psychokinetic, Puffin, ClueBot NG, Krouge, Helpful Pixie Bot, Art and Muscle, Cognitivecarbon, Savarona1, MusikAnimal, Ushakaron, Rs2360, Aisteco, Maxair215, Cup o' Java, SoledadKabocha, Vinayak 1995, Eli4ph, Zmicier P., Noyster, PhoenixPub, Dr Marmilade, Hunteroid, RegistryKey, Captain Chesapeake and Anonymous: 133 • First law of thermodynamics 
Source: https://en.wikipedia.org/wiki/First_law_of_thermodynamics?oldid=709496176 Contributors: Tarquin, XJaM, Heron, Jebba, JWSchmidt, Glenn, Cherkash, Reddi, Gutsul, Giftlite, Geni, Karol Langner, Icairns, Cinar, Discospinster, Pjacobi, Vsmith, Dave souza, Bender235, ESkog, Pjrich, Marx Gomes, Shanes, Smalljim, AtomicDragon, Pazouzou, Helix84, Orzetto, Alansohn, Arthena, PAR, Jheald, Count Iblis, Kazvorpal, KTC, SmthManly, ChrisNoe, Zealander, WadeSimMiser, Hdante, Mandarax, Rjwilmsi, The wub, Fish and karate, Gurch, Fresheneesz, SteveBaker, Srleffler, Mcavoys, Flying Jazz, ChrisChiasson, DVdm, Bgwhite, YurikBot, 4C~enwiki, Arado, Wavesmikey, Stephenb, NawlinWiki, ZacBowling, Dhollm, DeadEyeArrow, Nescio, Calaschysm, Sharkb, BorgQueen, JuJube, Pifvyubjwm, JDspeeder1, Mejor Los Indios, Vojta2, Sbyrnes321, Itub, SmackBot, Aido2002, Philx, Rex the first, McGeddon, Gilliam, The Gnome, Keegan, Complexica, Sadads, Gracenotes, VMS Mosaic, Marosszék, Sadi Carnot, Loodog, AstroChemist, Nonsuch, Pflatau, Wikster72, 2T, Iridescent, K, TwistOfCain, IvanLanin, Charlieb003, Rhetth, Mikiemike, Makeemlighter, Equendil, Cydebot, Kareemjee, Astrochemist, Meno25, Christian75, DumbBOT, LeBofSportif, Headbomb, BirdKr, EdJohnston, Perpetual motion machine, Pgagge, Luna Santin, MichaelHenley, TimVickers, Qwerty Binary, Zidane tribal, JAnDbot, Narssarssuaq, MER-C, Matthew Fennell, Dr mindbender, Acroterion, Easchiff, Askeyca, Bongwarrior, Wikidudeman, Usien6, Dirac66, TheBusiness, Hbent, Valthalas, Stafo86, Pharaoh of the Wizards, Abecedare, McDScott, Akmunna, Littlecanargie, Sunderland06, VolkovBot, TXiKiBoT, NPrice, JhsBot, LeaveSleaves, Cremepuff222, Venny85, Koen Van de moortel~enwiki, Blurpeace, Logan, Kbrose, SieBot, Caltas, Adamaja456, Happysailor, CombatCraig, Belinrahs, Momo san, Cloudjunkie, Shally87, ClueBot, Mild Bill Hiccup, NewYorkDreams, NuclearWarfare, PhySusie, TCGrenfell, MigFP, Erodium, Nathan Johnson, Wertuose, Gonfer, Addbot, Rishabhgoel, Lightbot, PV=nRT, 
Echinoidea, Luckas-bot, Yobot, Fraggle81, TaBOT-zerem, GMTA, Worm That Turned, AnomieBOT, Jim1138, Materialscientist, 90 Auto, ArthurBot, LilHelpa, Lh389, Xqbot, Popx3rocks, Nanog, GrouchoBot, ‫حامد میرزاحسینی‬, Waleswatcher, A. di M., Chjoaygame, FrescoBot, Tobby72, Pepper, Cannolis, Redrose64, Pinethicket, I dream of horses, Jonesey95, Martinvl, Jschnur, Vincenzo Malvestuto, Piandcompany, FoxBot, Tehfu, Fox Wilson, Skk146, Tbhotch, Isrl.abel, Ripchip Bot, EmausBot, Llewkcalbyram, 478jjjz, Passionless, Netheril96, Wikipelli, JSquish, Moravveji, Thine Antique Pen, Jay-Sebastos, Mentibot, BF6-NJITWILL, 912doctorwho, Wikiwind, Spicemix, ClueBot NG, Sag010793, Widr, Helpful Pixie Bot, Novusuna, Ninja-bunny.webs, Bibcode Bot, Lowercase sigmabot, Ditto51, NZLS11, ISTB351, Hallows AG, Wiki13, MusikAnimal, Mark Arsten, Ushakaron, Mn-imhotep, Rs2360, Zedshort, Glacialfox, Anbu121, Therealrockstar007, Coldestgecko, Alchemice, Keitam, Pmmanley, Fatimah M, Arcandam, Adwaele, Webclient101, Mogism, Ninjamen1234, Zmicier P., Chessmad1, Nerlost, Babitaarora, Zenibus, PhoenixPub, JaconaFrere, Ordessa, Isambard Kingdom, Timothya101., Captain Chesapeake, Das O2, Kolaberry, Awyeahlol, Klaus Schmidt-Rohr and Anonymous: 326 • Second law of thermodynamics Source: https://en.wikipedia.org/wiki/Second_law_of_thermodynamics?oldid=715820238 Contributors: The Anome, Jeronimo, XJaM, Roadrunner, Jdpipe, David spector, Lorenzarius, Michael Hardy, Ixfd64, Ahoerstemeier, Cyp, Theresa knott, Snoyes, Jebba, AugPi, Cherkash, Ilyanep, Tantalate, Reddi, Terse, Tb, Timc, IceKarma, DJ Clayworth, Tpbradbury, Phys, Omegatron, Marc Girod~enwiki, Jeffq, ScienceGuy, ChrisO~enwiki, Fredrik, Romanm, Gandalf61, Postdlf, Ashley Y, Sunray, Hadal, Robinh, Johnstone, Cutler, Tea2min, Stirling Newberry, Giftlite, ComaVN, N12345n, Karn, FunnyMan3595, Curps, FeloniousMonk, Chinasaur, Dav4is, Duncharris, Bobblewik, Edcolins, LiDaobing, Pcarbonn, Antandrus, Eroica, Ravikiran r, Kaldari, Jossi, Karol Langner, Wikimol, 
Rdsmith4, Panzi, Sam Hocevar, Neutrality, Ratiocinate, Trevor MacInnis, Grstain, DanielCD, Brianhe, Rich Farmbrough, KillerChihuahua, Rhobite, Pjacobi, Vsmith, ArnoldReinhold, Dave souza, Ivan Bajlo, Number 0, Bender235, Ignignot, Sietse Snel, Euyyn, SteveCoast, Bobo192, I9Q79oL78KiL0QTFHgyc, Aquillion, Nk, Maebmij, Helix84, AppleJuggler, Cpcjr, Jason One, Kingsindian, Zenosparadox, Arthena, Mineralogy, PAR, Wtshymanski, Evil Monkey, Jheald, Count Iblis, Dominic, Pauli133, Alai, KTC, Oleg Alexandrov, Crosbiesmith, ChrisNoe, Madmardigan53, Miaow Miaow, Keta, Denevans, Funhistory, Christopher Thomas, Gerbrant, GSlicer, Kbdank71, Nanite, Rjwilmsi, Koavf, Eyu100, Yamamoto Ichiro, Nihiltres, Dantecubed, Fresheneesz, Srleffler, Jittat~enwiki, Chobot, Flying Jazz, ChrisChiasson, Wavelength, Michaeladenner, RobotE, Hairy Dude, Bobby1011, Wavesmikey, Akamad, Stephenb, Gaius Cornelius, CambridgeBayWeather, Aeusoes1, SCZenz, Ragesoss, Dhollm, Abb3w, Mgrierson, Dna-webmaster, WAS 4.250, Enormousdude, 2over0, Dieseldrinker, Arthur Rubin, BorgQueen, Ilmari Karonen, Fluent aphasia, Profero, Infinity0, Sbyrnes321, DVD R W, Knowledgeum, Luk, SmackBot, Cirejcon, Ashenai, ChXu, CarbonCopy, McGeddon, Palinurus, WebDrake, Jim62sch, David Shear, Neptunius, Adrian232, Gunnar.Kaestle, Lsommerer, Bmord, Jab843, Yamaguchi , Gilliam, The Gnome, ThorinMuglindir, Kmarinas86, Chris the speller, Bduke, Tisthammerw, MalafayaBot, Complexica, Bonaparte, Desp~enwiki, Zmanish, Verrai, Ben Rogers, Sholto Maud, Andyparkins, H-J-Niemann, EPM, Dreadstar, DMacks, Henning Makholm, Sadi Carnot, Mikaduki, Zchenyu, AThing, Miftime, Rklawton, Doanison, JorisvS, Mgiganteus1, Nonsuch, IronGargoyle, AwesomeMachine, Stikonas, MrArt, Peyre, Xionbox, Astrobradley, Dan Gluck, Seqsea, K, Michaelbusch, CzarB, Kommando797, Spk ben, George100, Tubbyspencer, Josedanielc, Mikiemike, Ale jrb, Wafulz, AlbertSM, Father Ignatius, Jucati, Emilio Juanatey, Myasuda, Cydebot, Rifleman 82, Meno25, Ring0, Miguel de Servet, Michael C 
Price, DumbBOT, JodyB, Spookpadda, Daa89563, LeBofSportif, DMZ, Headbomb, Marek69, John254, EdJohnston, AntiVandalBot, Widefox, Gökhan, Canadian-Bacon, Narssarssuaq, MER-C, Physical Chemist, Acroterion, Meeples, Magioladitis, VoABot II, Mbarbier, Hubbardaie, Daarznieks, Dirac66, Hbent, Heqwm, Tercer, Wkussmaul, Jtir, Aeternium, Hweimer, R'n'B, Mbweissman, Time traveller, J.delanoy, Pharaoh of the Wizards, Musaran, Ian.thomson, Bluecheese333, Salih, LordAnubisBOT, Frisettes, Stootoon, Ppithermo, VolkovBot, Larryisgood, Joeoettinger, ABF, Speaker to Lampposts, JayEsJay, Rei-bot, Anonymous Dissident, Michael H 34, LeaveSleaves, Natg 19, Maxim, Antixt, Enviroboy, San Diablo, Zebas, Kbrose, Subh83, SieBot, YonaBot, BotMultichill, Dawn Bard, Caltas, Jewk, Crash Underride, Arjun r acharya, Discrete,


Nrsmith, Jdaloner, Barry Fruitman, Sunrise, Denisarona, Vanished user qkqknjitkcse45u3, ClueBot, PaulLowrance, The Thing That Should Not Be, Wisemove, Hjlim, Bbanerje, Lbrewer42, LizardJr8, LonelyBeacon, Manishearth, Jimbomonkey, Nymf, Simonmckenzie, Wndl42, Estirabot, Sun Creator, Laughitup2, Nafis ru, AC+79 3888, Crowsnest, DumZiBoT, Darkicebot, AP Shinobi, BodhisattvaBot, Jovianeye, Lilmy13, Gonfer, Subversive.sound, Aunt Entropy, MystBot, Addbot, Magus732, Jncraton, Ashanda, MrOllie, CarsracBot, Favonian, Tide rolls, Lightbot, Gatewayofintrigue, Teles, Arbitrarily0, Hartz, Luckas-bot, Yobot, Fraggle81, Sanyi4, Egbertus, AnomieBOT, Rubinbot, Jim1138, Jacob2718, Materialscientist, Citation bot, Chemeditor, LilHelpa, Xqbot, Nanog, Aa77zz, GrouchoBot, ChristopherKingChemist, Rhettballew, Waleswatcher, Sin.pecado, Chjoaygame, FrescoBot, Tobby72, JMS Old Al, D'ohBot, RWG00, Tomerbot, Vh mby, Citation bot 1, PigFlu Oink, Pinethicket, I dream of horses, Jonesey95, AnandaDaldal, Serols, Alfredwongpuhk, ‫کاشف عقیل‬, Howzeman, Klangenfurt, Naji Khaleel, Yappy2bhere, LoStrangolatore, RjwilmsiBot, Ptbptb, Aircorn, EmausBot, John of Reading, Lea phys, Da500063, Netheril96, Dcirovic, Arjun S Ariyil, Evanh2008, JSquish, John Cline, Bollyjeff, Mattedia, Kenan82, Tls60, WikiPidi, Ems2715, BF6-NJITWILL, Spicemix, Rocketrod1960, ClueBot NG, Snoid Headly, Jj1236, Mormequill, Widr, WikiPuppies, Helpful Pixie Bot, Bibcode Bot, Lowercase sigmabot, BG19bot, Savarona1, Cdh1001, Ugncreative Usergname, Glevum, Rs2360, Crio, Rowan Adams, Pratyya Ghosh, LeeMcLoughlin1975, Adwaele, Mdkssner, Jchammel, Pterodactyloid, Lugia2453, Zmicier P., Jochen Burghardt, Reatlas, Nerlost, Nicksola, Glenn Tamblyn, Mre env, The-vegan-muser, Aspro89, Prokaryotes, Nakitu, PhoenixPub, Ammamaretu, Skr15081997, Burnandquiver, Monkbot, Douglas Cotton, Wiki jeri, IagoQnsi, Trackteur, Tylerleeredd, Theeditinprogress, BiologicalMe, Jorge Guerra Pires, KH-1, Crystallizedcarbon, Yusefghouth, Captain Chesapeake, 
CAPTAIN RAJU, Klaus Schmidt-Rohr and Anonymous: 561 • Third law of thermodynamics Source: https://en.wikipedia.org/wiki/Third_law_of_thermodynamics?oldid=709781075 Contributors: The Anome, XJaM, Cherkash, Rob Hooft, Reddi, Stismail, Grendelkhan, Vamos, Fredrik, Guy Peters, Cutler, Giftlite, Smjg, Tom harrison, Everyking, Ned Morrell, Karol Langner, D6, Pjacobi, Bender235, Duk, Helix84, Keenan Pepper, Andrewpmk, PAR, Jheald, Gene Nygaard, Miaow Miaow, SeventyThree, Nanite, Chobot, YurikBot, Chris Capoccia, Wavesmikey, Okedem, Salsb, SCZenz, Dhollm, E2mb0t~enwiki, Tony1, CWenger, Sbyrnes321, McGeddon, Unyoyega, Gilliam, Sandycx, Colonies Chris, Malosse, Rrburke, Marosszék, BZegarski, Sadi Carnot, Majorclanger, 2T, K, Richard75, Einstein runner, Astrochemist, Gogo Dodo, Ring0, Khattab01~enwiki, Dchristle, Thijs!bot, Barticus88, Widefox, MER-C, Magioladitis, Alan Holyday, Edward321, Canberra User, Masaki K, Mbweissman, Time traveller, Ssault, Olulade, CardinalDan, VolkovBot, Malinaccier, A4bot, Wolfrock, Zebas, Kbrose, Natox, SieBot, Gerakibot, Oxymoron83, OKBot, Bewporteous, Mygerardromance, WikiLaurent, TSRL, ClueBot, LAX, Wikijens, MigFP, Happysam92, Spitfire, Addbot, CarsracBot, Luckas-bot, Sanyi4, AnomieBOT, Rubinbot, Jim1138, JackieBot, Citation bot, Xqbot, Draxtreme, GrouchoBot, RibotBOT, Waleswatcher, Erik9, Chjoaygame, D'ohBot, Jonesey95, Nobleness of Mind, Hb2007, EmausBot, John of Reading, 8digits, Shuipzv3, Wmayner, Nexia asx, Spicemix, ClueBot NG, Alchemist314, Helpful Pixie Bot, Bibcode Bot, BG19bot, CityOfSilver, Bush6984, Rs2360, Zedshort, Nitcho1as12, SimmeD, Patton622, Adwaele, Cup o' Java, Cesaranieto~enwiki, Ankitdwivedimi6, FiredanceThroughTheNight, Dakkagon, Sball004, Garfield Garfield, Krishtafar, Wikixenia and Anonymous: 109 • History of thermodynamics Source: https://en.wikipedia.org/wiki/History_of_thermodynamics?oldid=713637911 Contributors: Collabi, Lumos3, Arkuat, Gandalf61, Cutler, Karol Langner, Eric Forste, PAR, Marianika~enwiki, 
Carcharoth, Benbest, Rjwilmsi, Ligulem, Srleffler, Chobot, Gaius Cornelius, CambridgeBayWeather, Ragesoss, Dhollm, Moe Epsilon, Rayc, Netrapt, Tropylium, SmackBot, Jagged 85, TimBentley, Colonies Chris, A.R., DMacks, Ligulembot, Mion, Sadi Carnot, Pilotguy, JzG, JorisvS, Peterlewis, Special-T, AdultSwim, Lottamiata, Myasuda, FilipeS, Gtxfrance, Doug Weller, M karzarj, Barticus88, D.H, Greg L, EdJogg, VoABot II, Cardamon, Jtir, Inwind, ElinorD, Riick, Enviroboy, Radagast3, Natox, SieBot, I Love Pi, Anchor Link Bot, Tomasz Prochownik, MCCRogers, Taroaldo, J8079s, Djr32, CohesionBot, Eeekster, XLinkBot, Saeed.Veradi, Ariconte, Kwjbot, Addbot, Lightbot, Wikkidd, Luckas-bot, Yobot, Ptbotgourou, Ajh16, AnomieBOT, Citation bot, ArthurBot, Xqbot, J04n, GrouchoBot, ChristopherKingChemist, SassoBot, Geraldo61, Fortdj33, Machine Elf 1735, Citation bot 1, TobeBot, Marie Poise, Syncategoremata, ClueBot NG, Helpful Pixie Bot, Bibcode Bot, Ludi Romani, Bfong2828, SoledadKabocha, Belief action, Nerlost, Sibyl Gray, Yikkayaya, CleanEnergyPundit and Anonymous: 32 • An Experimental Enquiry Concerning the Source of the Heat which is Excited by Friction Source: https://en.wikipedia.org/wiki/An_ Experimental_Enquiry_Concerning_the_Source_of_the_Heat_which_is_Excited_by_Friction?oldid=715176110 Contributors: Jdpipe, Dominus, Charles Matthews, Bloodshedder, Cutler, MakeRocketGoNow, Mdd, Wijnand, GregorB, Rjwilmsi, Tim!, Ligulem, Vclaw, Jaraalbe, RussBot, Dhollm, Qero, Itub, SmackBot, Localzuk, Peterlewis, Wizard191, Cydebot, Mrmrbeaniepiece, Gioto, Nyttend, PC78, TomyDuby, Inwind, Guillaume2303, Kdruhl, Good Olfactory, Airplaneman, Tassedethe, Lightbot, Citation bot, ChristopherKingChemist, ClueBot NG, Saehry, Nerlost, VexorAbVikipædia and Anonymous: 2 • Control volume Source: https://en.wikipedia.org/wiki/Control_volume?oldid=642129926 Contributors: Jdpipe, Silverfish, Rich Farmbrough, Xezbeth, Mairi, Mdd, RJFJR, Kbdank71, Mathbot, Siddhant, Matador, Dhollm, Bjs1234, Plober, Chris the 
speller, Bluebot, HydrogenSu, Sadi Carnot, Wanstr, Wolfram.Tungsten, STBot, FelixTheCat85, Salih, Dolphin51, Cacadril, Crowsnest, Addbot, Iwfyita, ZéroBot and Anonymous: 11 • Ideal gas Source: https://en.wikipedia.org/wiki/Ideal_gas?oldid=716419913 Contributors: SimonP, Peterlin~enwiki, Ben-Zin~enwiki, FlorianMarquardt, Patrick, Michael Hardy, Wshun, GTBacchus, Looxix~enwiki, Ellywa, Nikai, Schneelocke, Bamos, Robbot, Hankwang, Kizor, COGDEN, Soilguy3, Tea2min, Enochlau, Giftlite, Wolfkeeper, Herbee, Brona, Bensaccount, Louis Labrèche, Kraton, Karol Langner, H Padleckas, Tsemii, Edsanville, Brianjd, Pjacobi, Vsmith, Altmany, SpookyMulder, Bender235, Chewie, Nigelj, Avathar~enwiki, Nk, Keenan Pepper, PAR, Cdc, Rebroad, H2g2bob, -kkm, Gene Nygaard, BillC, GregorB, Palica, Nanite, Margospl, Chobot, ChrisChiasson, YurikBot, Hairy Dude, JabberWok, CambridgeBayWeather, Rick lightburn, D. F. Schmidt, Dhollm, Aaron Schulz, Bota47, 2over0, Aleksas, TBadger, CWenger, Paul D. Anderson, Bo Jacoby, CrniBombarder!!!, SmackBot, Kmarinas86, Chris the speller, ViceroyInterus, GregRM, MalafayaBot, Complexica, Colonies Chris, Moosesheppy, Whpq, Michael Ross, Just plain Bill, Sadi Carnot, Lambiam, Kpengboy, MTSbot~enwiki, Tawkerbot2, OlexiyO, Joelholdsworth, Cydebot, Nonagonal Spider, Headbomb, Bigbill2303, JustAGal, Escarbot, Leftynm, Nosbig, JAnDbot, Davidtwu, Bongwarrior, Corpeter~enwiki, User A1, Mythealias, CommonsDelinker, Leyo, Slugger, Huzzlet the bot, Davidr222, Landarski, Bigjoestalin, Stan J Klimas, Tarotcards, Hesam 8529022, VolkovBot, DSRH, Theosch, Malinaccier, Tsi43318, Riick, Nosferatütr, SieBot, Gerakibot, Man It’s So Loud In Here, Adamtester, Thekingofspain, Qmantoast, ClueBot, Razimantv, Mild Bill Hiccup, Turbojet, Vql, CarlosPatiño, Katanada, Khunglongcon, WikiDao, Prowikipedians, Addbot, Power.corrupts, Fieldday-sunday, EconoPhysicist, Ckk253, PranksterTurtle, Mean Free Path, Tide rolls, Zorrobot, Luckas-bot, Yobot, Fraggle81, Kipoc, Paranoidhuman, 
Materialscientist, Xqbot, Nickkid5, GrouchoBot, ChristopherKingChemist, RibotBOT, E0steven, SD5, BoomerAB, Chjoaygame, Nagoltastic, FrescoBot, FoxBot, சஞ்சீவி சிவகுமார், EmausBot, WikitanvirBot, Mrericsully, HiW-Bot, Kiwi128, AManWithNoPlan, Donner60, ClueBot NG, CocuBot, Movses-bot, Tr00rle, Kevinjasm, Piguy101, Brad7777,


Aisteco, Uopchem25asdf, BeaumontTaz, YDelta, Mike666234, HiYahhFriend, MantleMeat, Trackteur, Macofe, Carlojoseph14, Alligator420, Mtthwknnd4 and Anonymous: 181 • Real gas Source: https://en.wikipedia.org/wiki/Real_gas?oldid=709413052 Contributors: Charles Matthews, Robbot, Giftlite, Brianjd, PAR, Velella, Jost Riedel, Rjwilmsi, Boccobrock, Dhollm, Tony1, Closedmouth, SmackBot, Colonies Chris, Anakata, Gogo Dodo, Raoul NK, Headbomb, Fayenatic london, Olaf, Stan J Klimas, Heero Kirashami, Vanished user 39948282, TXiKiBoT, Theosch, LeaveSleaves, Meters, Logan, Jpuppy, StaticGull, Marco zannotti, ClueBot, Ideal gas equation, Alexbot, Katanada, Crowsnest, Addbot, Power.corrupts, Download, LinkFA-Bot, 84user, Krano, Luckas-bot, Takuma-sa, Azylber, Sonia, Gumok, Omnipaedista, Shadowjams, BenzolBot, Pinethicket, MinkeyBuddy, MastiBot, Jauhienij, EmausBot, Klbrain, Dcirovic, ZéroBot, Zl1corvette, ClueBot NG, Jwchong, UAwiki, Ushakaron, Mn-imhotep, Sarah george mesiha, Marvin W. Hile, Zrephel, Jianhui67, VIKRAMGUPTAJI and Anonymous: 65 • Thermodynamic process Source: https://en.wikipedia.org/wiki/Thermodynamic_process?oldid=715721366 Contributors: Glenn, Giftlite, Andycjp, Karol Langner, Paul August, Alansohn, PAR, GangofOne, YurikBot, Bhny, Wavesmikey, Dhollm, Bota47, Jeh, SmackBot, MalafayaBot, Chlewbot, Lambiam, Karenjc, Thijs!bot, JAnDbot, R'n'B, Spshu, DorganBot, VolkovBot, ABF, Philip Trueman, Lechatjaune, Jackfork, AlleborgoBot, Natox, SieBot, Gerakibot, OKBot, FearChild, Cerireid, Addbot, Amirber, BepBot, Luckas-bot, Ptbotgourou, Choij, Daniele Pugliesi, ArthurBot, Erik9bot, Chjoaygame, Jauhienij, EmausBot, Mmeijeri, ClueBot NG, Pcarmour, Helpful Pixie Bot, J824h, BG19bot, F=q(E+v^B), Glacialfox, Prokaryotes, DavRosen, Quenhitran, Dhyannesh Dev, BadFaithEditor, Metlapalli sai kiran kanth, K Sikdar, Shrodinger X and Anonymous: 31 • Isobaric process Source: https://en.wikipedia.org/wiki/Isobaric_process?oldid=717073640 Contributors: Peterlin~enwiki, Ellywa, Glenn, 
AugPi, Wik, Robbot, Karol Langner, Discospinster, Rgdboer, Duk, Orzetto, Keenan Pepper, Margosbot~enwiki, YurikBot, Dhollm, Plober, SmackBot, Loodog, Pflatau, Sabate, Damouns, Thijs!bot, Gökhan, JAnDbot, JaGa, El Belga, VolkovBot, Lechatjaune, T0lk, Insanity Incarnate, Kbrose, SieBot, Mike2vil, WikiBotas, Hjlim, Auntof6, Crowsnest, MystBot, Addbot, Jncraton, PV=nRT, Luckas-bot, Yobot, TaBOT-zerem, Sanyi4, Rubinbot, GrouchoBot, Pyther, FrescoBot, ‫عبد المؤمن‬, LucienBOT, Simeon89, Pinethicket, Dance-a-day, TjBot, Ripchip Bot, EmausBot, WikitanvirBot, Carultch, ClueBot NG, Anagogist, AvocatoBot, BattyBot, IkamusumeFan, CarrieVS, Zziccardi, Ebag7125, Tyler.neysmith and Anonymous: 41 • Isochoric process Source: https://en.wikipedia.org/wiki/Isochoric_process?oldid=715485863 Contributors: Peterlin~enwiki, Ixfd64, Ellywa, Glenn, AugPi, Robbot, BenFrantzDale, Karol Langner, ArneBab, Rich Farmbrough, CDN99, DanielNuyu, Duk, Gene Nygaard, Knuckles, YurikBot, Dhollm, Bota47, StuRat, Plober, Mejor Los Indios, KocjoBot~enwiki, Ortho, A.Z., David Legrand, Mahlerite, ALittleSlow, Thijs!bot, Kerotan, Nyq, Freddyd945, JaGa, Ydw, Shoessss, VolkovBot, LokiClock, JhsBot, Nightkhaos, AlleborgoBot, Kbrose, SieBot, BotMultichill, Lara bran, ClueBot, Wikijens, DragonBot, MystBot, Addbot, Skyezx, Nachoj, PV=nRT, Zorrobot, Luckas-bot, Yobot, Sanyi4, Xqbot, GrouchoBot, Pyther, Erik9bot, OgreBot, RedBot, Thái Nhi, Jeffrd10, TjBot, Ifly6, Chuchung712, Voltaire169, ClueBot NG, IkamusumeFan, Ginsuloft, JJMC89 and Anonymous: 46 • Isothermal process Source: https://en.wikipedia.org/wiki/Isothermal_process?oldid=717175757 Contributors: Roadrunner, Peterlin~enwiki, Glenn, Cyan, AugPi, Dcoetzee, Robbot, HaeB, Karol Langner, Rich Farmbrough, Robotje, Duk, Dungodung, LOL, Shpoffo, Nneonneo, Gelo71, Yuta Aoki, Margosbot~enwiki, Chobot, Bgwhite, RussBot, Postglock, CambridgeBayWeather, Adamrush, Dhollm, Plober, SmackBot, David Shear, Mcduff, Coffin, Akriasas, Lambiam, Pflatau, Vanisaac, OlexiyO, 
Astrochemist, Mtpaley, John254, Kathovo, JAnDbot, JaGa, JCraw, R'n'B, Lechatjaune, Pedvi, !dea4u, Romeoracz, SieBot, Yintan, WikiBotas, EoGuy, DragonBot, Forbes72, MystBot, Addbot, Jncraton, Tide rolls, PV=nRT, Legobot, Luckas-bot, Ptbotgourou, Sanyi4, Nallimbot, Rtanz, Rubinbot, Jim1138, Xqbot, Trueravenfan, GrouchoBot, Erik9bot, Jwilson75503, Thái Nhi, EmausBot, Netheril96, A2soup, AManWithNoPlan, ClueBot NG, KrDa, AnkurBargotra, Uopchem0251, BattyBot, ChrisGualtieri, Dexbot, Namige, Evan585619, Rajawaseem6, Retired Pchem Prof, Eden-K121D and Anonymous: 95 • Adiabatic process Source: https://en.wikipedia.org/wiki/Adiabatic_process?oldid=717492687 Contributors: AxelBoldt, CYD, Bryan Derksen, AdamW, Andre Engels, JeLuF, Roadrunner, Peterlin~enwiki, Icarus~enwiki, Edward, Michael Hardy, Tim Starling, Glenn, AugPi, Hike395, Ec5618, Steinsky, Kaare, Grendelkhan, Phys, Raul654, Donarreiskoffer, Robbot, Chancemill, Sverdrup, Moink, Wereon, Enochlau, Giftlite, Mat-C, BenFrantzDale, Mboverload, Andycjp, Gunnar Larsson, Karol Langner, Klemen Kocjancic, Discospinster, Rich Farmbrough, Guanabot, Vsmith, Bender235, Evand, Gershwinrb, Bobo192, Kghose, Duk, Giraffedata, Jtalledo, PAR, BernardH, Count Iblis, Artur adib, Gene Nygaard, Dan100, Linas, SeventyThree, Palica, Rjwilmsi, JLM~enwiki, Ucucha, Chobot, DVdm, Bgwhite, Triku~enwiki, YurikBot, Hairy Dude, RussBot, Stassats, NawlinWiki, Dhollm, Mlouns, Tony1, Fsiler, Plober, Mejor Los Indios, SmackBot, Slashme, InverseHypercube, Giraldusfaber, The Gnome, Dauto, ThorinMuglindir, Bluebot, Kevinbevin9, Sbharris, Tschwenn, Smokefoot, Hgilbert, Dr. Crash, SashatoBot, Shrew, Loodog, KostasG, Breno, Mgiganteus1, NongBot~enwiki, Pflatau, Rm w a vu, Joe Frickin Friday, Tac2z, Phuzion, Tawkerbot2, Mika1h, W.F.Galway, Rracecarr, Thijs!bot, E. 
Ripley, Thljcl, Escarbot, Stannered, Mikenorton, TAnthony, MSBOT, Magioladitis, AuburnPilot, Aka042, Dirac66, User A1, Dbrunner, Pgriffin, AstroHurricane001, Choihei, Stan J Klimas, NewEnglandYankee, Molly-in-md, Balawd, Dhaluza, STBotD, Deor, VolkovBot, Kyle the bot, Plenumchamber~enwiki, Venny85, MajorHazard, Kbrose, David Straight, SieBot, Ivan Štambuk, Damorbel, VVVBot, Oxymoron83, Anchor Link Bot, Hamiltondaniel, Breeet, Dolphin51, Denisarona, ClueBot, IceUnshattered, Mild Bill Hiccup, Heathmoor, Alexbot, JLewis98856, Pcmproducts, Amaruca, Ecomesh, NevemTeve, Stefano Schiavon, ChrisHodgesUK, Mscript, Bannerts, Addbot, The Geologist, Alkonblanko, Masegado, Sarasknight, Lindert, EconoPhysicist, BepBot, AnnaFrance, Ginosbot, Emilio juanatey, Zorrobot, Ettrig, Legobot, Luckas-bot, Yobot, Tohd8BohaithuGh1, Sirsparksalot, Sanyi4, Synchronism, AnomieBOT, Ciphers, Xtreme219, Darkroll, Materialscientist, Xqbot, GrouchoBot, Sheeson, Chjoaygame, FrescoBot, Sapphirus, Jschnur, RedBot, Serols, FoxBot, Eracer55, TCarey, Tbhotch, RjwilmsiBot, EmausBot, John of Reading, DacodaNelson, Mobius Bot, Carultch, Donner60, Eg-T2g, ClueBot NG, Pvnuffel, KL56-NJITWILL, Clive.gregory, Rogerwillismillsii, Bibcode Bot, Alexgotsis, Bauka91 91, Royourboat, Lynskyder, YumOooze, Zedshort, Warrenrob50, Armasd, Dexbot, C5st4wr6ch, Coolitic, Destroyer130, Jodosma, Samgo27, Bcheah, Toyalima, JCMPC, Femkemilene, Monkbot, Krishtafar, Appleuseryu and Anonymous: 222 • Isenthalpic process Source: https://en.wikipedia.org/wiki/Isenthalpic_process?oldid=645636432 Contributors: Glenn, Karol Langner, Count Iblis, Gene Nygaard, NawlinWiki, Dhollm, Hirudo, Thorney¿?, SmackBot, Bduke, Xyabc, Rracecarr, Hasanpasha, StuartF, Stan J Klimas, Davecrosby uk, Dolphin51, Editor2020, MystBot, Addbot, LatitudeBot, Zorrobot, Amirobot, Citation bot, EmausBot, WikitanvirBot, ZéroBot, Rmashhadi, Helpful Pixie Bot, Titodutta, MusikAnimal, Monkbot and Anonymous: 10 • Isentropic process Source: 
https://en.wikipedia.org/wiki/Isentropic_process?oldid=708145999 Contributors: JeLuF, Michael Hardy, Kingturtle, Darkwind, Glenn, Richy, Duk, PAR, Jheald, Ling Kah Jai, Linas, YurikBot, Dhollm, Arthur Rubin, SmackBot, Ohconfucius, Pierre cb,


CHAPTER 12. TEXT AND IMAGE SOURCES, CONTRIBUTORS, AND LICENSES

Dr.K., Mikiemike, Cydebot, Rracecarr, Thijs!bot, AntiVandalBot, QuiteUnusual, Ac44ck, Mythealias, Stan J Klimas, Liveste, Zojj, Billcarr178, VolkovBot, Jamelan, AlleborgoBot, Edbrambley, Tresiden, WereSpielChequers, Anchor Link Bot, Dolphin51, Seanzer, DumZiBoT, MystBot, Addbot, LatitudeBot, Zorrobot, Luckas-bot, Yobot, Ptbotgourou, Fraggle81, Doogleface, Daniele Pugliesi, Materialscientist, Daniel Souza, Taha-yasseri, Chjoaygame, ‫عبد المؤمن‬, HRoestBot, MastiBot, EmausBot, WikitanvirBot, Mmeijeri, Hhhippo, ChuispastonBot, ClueBot NG, Magneticmoment, Zedshort, Oldscouser, GoShow, Manul, Ahujamukesh007, Prymshbmg, Amortias, Tyler.neysmith, Vickyyugy, Scipsycho, Bubaff and Anonymous: 78 • Polytropic process Source: https://en.wikipedia.org/wiki/Polytropic_process?oldid=702584774 Contributors: AxelBoldt, Glenn, Mihail Vasiliev, Karol Langner, Nick Mks, SDC, Dhollm, Alain r, Plober, SmackBot, Pegua, Mikiemike, Gogo Dodo, JamesAM, Edokter, JAnDbot, JeffConrad, PhilKnight, Ac44ck, R'n'B, Jeffbadge, VolkovBot, DoorsAjar, Lechatjaune, AlleborgoBot, AllHailZeppelin, Alexbot, NellieBly, MTessier, Addbot, RN1970, PV=nRT, Luckas-bot, Sanyi4, JackieBot, Materialscientist, GrouchoBot, ‫قلی زادگان‬, Erik9bot, Tøpholm, Jzana, BG19bot, Zedshort, Cky2250, IkamusumeFan, YFdyh-bot, Marcello Pas, Danhatton, CasualJJ, Miller.alexb and Anonymous: 46 • Introduction to entropy Source: https://en.wikipedia.org/wiki/Introduction_to_entropy?oldid=711449081 Contributors: Edward, Kku, Tea2min, Dratman, Dave souza, Art LaPella, Army1987, Pharos, Gary, PAR, Jheald, Carcharoth, DaveApter, Vegaswikian, Fresheneesz, Loom91, Grafen, Retired username, Dhollm, Brisvegas, Light current, Serendipodous, User24, SmackBot, Giraldusfaber, Bduke, T.J. 
Crowder, Microfrost, Xyzzyplugh, Sadi Carnot, PAS, Lazylaces, JorisvS, 16@r, Kirbytime, K, FilipeS, Cydebot, Gtxfrance, Headbomb, John254, MarshBot, Dylan Lake, Ray Eston Smith Jr, Hypergeek14, Dirac66, BigrTex, Davidm617617, TJKluegel, Papparolf, Adam C C, Davwillev, Zain Ebrahim111, Sesshomaru, Kbrose, ConfuciusOrnis, Dolphin51, Ac1201, Rodhullandemu, Plastikspork, Crowsnest, Yobot, Zaereth, DBBabyboydavey, Kissnmakeup, AnomieBOT, Daniele Pugliesi, Ipatrol, EryZ, Danno uk, Citation bot, LilHelpa, DanP4522874, Chjoaygame, FrescoBot, Vh mby, Nilock, Vrenator, Combee123, Sixtylarge2000, Drozdyuk, Wayne Slam, ClueBot NG, BG19bot, Michelino12, Marko Petek, Gsoverby, Prokaryotes, W. P. Uzer, Yikkayaya, Hwmoon90, Patrickrowanandrews, DomFerreira01 and Anonymous: 72 • Entropy Source: https://en.wikipedia.org/wiki/Entropy?oldid=717502595 Contributors: Tobias Hoevekamp, Chenyu, CYD, Bryan Derksen, Zundark, The Anome, BlckKnght, Awaterl, XJaM, Roadrunner, Peterlin~enwiki, Jdpipe, Heron, Youandme, Olivier, Stevertigo, PhilipMW, Michael Hardy, Macvienna, Zeno Gantner, Looxix~enwiki, J'raxis, Humanoid, Darkwind, AugPi, Jiang, Kaihsu, Jani~enwiki, Mxn, Smack, Disdero, Tantalate, Timwi, Reddi, Terse, Dysprosia, Jitse Niesen, Andrewman327, Piolinfax, Tpbradbury, Saltine, J D, Atuin, Raul654, Wetman, Lumos3, Jni, Phil Boswell, Ruudje, Robbot, Fredrik, Alrasheedan, Naddy, Sverdrup, Texture, Hadal, David Edgar, Ianml, Aetheling, Tea2min, Connelly, Paisley, Giftlite, Graeme Bartlett, DavidCary, Haeleth, BenFrantzDale, Lee J Haywood, Herbee, Xerxes314, Everyking, Anville, Dratman, Henry Flower, NotableException, Gracefool, Macrakis, Christofurio, Zeimusu, Yath, Gunnar Larsson, Karol Langner, JimWae, Mjs, H Padleckas, Pmanderson, Icairns, Arcturus, Tsemii, Edsanville, E David Moyer, Mschlindwein, Freakofnurture, Lone Isle, Rich Farmbrough, KillerChihuahua, Pjacobi, Vsmith, Dave souza, Gianluigi, Mani1, Paul August, Bender235, Kbh3rd, Kjoonlee, Geoking66, RJHall, Pt, El C, Laurascudder, 
Aaronbrick, Chuayw2000, Bobo192, Marathoner, Wisdom89, Giraffedata, VBGFscJUn3, Physicistjedi, 99of9, Obradovic Goran, Haham hanuka, Mdd, Geschichte, Gary, Mennato, Arthena, Keenan Pepper, Benjah-bmm27, Riana, Iris lorain, PAR, Melaen, Velella, Knowledge Seeker, Jheald, Count Iblis, Drat, Egg, Artur adib, Lerdsuwa, Gene Nygaard, Oleg Alexandrov, Omnist, Sandwiches, Joriki, Velho, Simetrical, MartinSpacek, Woohookitty, Linas, TigerShark, StradivariusTV, Jacobolus, Wijnand, EnSamulili, Pkeck, Mouvement, Jwanders, Eleassar777, Tygar, SeventyThree, Jonathan48, DL5MDA, Aarghdvaark, Graham87, Marskell, V8rik, Nanite, Rjwilmsi, Thechamelon, HappyCamper, Ligulem, TheIncredibleEdibleOompaLoompa, Dougluce, MarnetteD, GregAsche, FlaBot, RobertG, Mathbot, Nihiltres, Gurch, Frelke, Intgr, Fresheneesz, Srleffler, Physchim62, WhyBeNormal, Chobot, DVdm, VolatileChemical, YurikBot, Wavelength, Jimp, Alpt, Kafziel, Wolfmankurd, Bobby1011, Loom91, Bhny, JabberWok, Stephenb, Gaius Cornelius, Wimt, Ugur Basak, Odysses, Shanel, NawlinWiki, SAE1962, Sitearm, Retired username, Dhollm, Ellwyz, Crasshopper, Shotgunlee, Dr. 
Ebola, DeadEyeArrow, Bota47, Rayc, Brisvegas, Doetoe, Ms2ger, WAS 4.250, Vadept, Light current, Enormousdude, Theodolite, Ballchef, The Fish, ChrisGriswold, Theda, CharlesHBennett, Chaiken, Paganpan, Bo Jacoby, Pentasyllabic, Pipifax, DVD R W, ChemGardener, Itub, Attilios, Otheus, SmackBot, ElectricRay, Reedy, InverseHypercube, KnowledgeOfSelf, Jim62sch, David Shear, Mscuthbert, Ixtli, Jab843, Pedrose, Edgar181, Xaosflux, Hmains, Betacommand, Skizzik, ThorinMuglindir, Kmarinas86, Oneismany, Master Jay, Kurykh, QTCaptain, Bduke, Dreg743, Complexica, Imaginaryoctopus, Basalisk, Nbarth, Sciyoshi~enwiki, Dlenmn, Colonies Chris, Darth Panda, Chrislewis.au, BW95, Zachorious, Can't sleep, clown will eat me, Ajaxkroon, ZezzaMTE, Apostolos Margaritis, Shunpiker, Homestarmy, AltheaJ, Ddon, Memming, Engwar, Nakon, [email protected], G716, LoveMonkey, Metamagician3000, Sadi Carnot, Yevgeny Kats, SashatoBot, Tsiehta, Lambiam, AThing, Oenus, Eric Hawthorne, MagnaMopus, Lakinekaki, Mbeychok, JorisvS, Mgiganteus1, Nonsuch, Dftb, Physis, Slakr, Dicklyon, Tiogalinha~enwiki, Abjad, Dr.K., Cbuckley, HappyVR, Adodge, BranStark, HisSpaceResearch, K, Astrobayes, Paul venter, Gmaster108, RekishiEJ, Jive Dadson, JRSpriggs, Emote, Patrickwooldridge, Vaughan Pratt, CmdrObot, Hanspi, Jsd, Dgw, BassBone, Omnichic82, Electricmic, NE Ent, Adhanali, FilipeS, Jac16888, Cydebot, Natasha2006, Kanags, WillowW, Gtxfrance, Mike Christie, Rifleman 82, Gogo Dodo, Sam Staton, Hkyriazi, Rracecarr, Miguel de Servet, Michael C Price, Rize Above, Soumya.92, Aintsemic, Hugozam, Gurudev23, Csdidier, Abtract, Yian, Thijs!bot, Epbr123, Lg king, Opabinia regalis, Moveaway00, LeBofSportif, Teh tennisman, Kahriman~enwiki, Fred t hamster, Headbomb, Neligterink, Esowteric, Electron9, EdJohnston, D.H, Dartbanks, DJ Creature, Stannered, Seaphoto, FrankLambert, Ray Eston Smith Jr, Tim Shuba, MECU, Astavats, Serpent’s Choice, JAnDbot, MER-C, Reallybored999, Physical Chemist, XerebZ, RebelRobot, Magioladitis, Bongwarrior, 
VoABot II, Avjoska, Bargebum, Tonyfaull, HGHSTROJAN, Dirac66, User A1, Jacobko, Glen, Steevven1, DGG, Hdt83, GuidoGer, Keith D, Ronburk, Pbroks13, Leyo, Mbweissman, Mausy5043, HEL, J.delanoy, Captain panda, Jorgenumata, Numbo3, Peter Chastain, Josterhage, Maurice Carbonaro, Thermbal, Shawn in Montreal, Camarks, Cmbreuel, Nwbeeson, Touch Of Light, Constatin666999, Pundit, Edzevallos, Juliancolton, Linshukun, DorganBot, Rising*From*Ashes, Inwind, Lseixas, Izno, Idioma-bot, Fimbulfamb, Cuzkatzimhut, Ballhausflip, Larryisgood, Macedonian, Pasquale.Carelli, LokiClock, Philip Trueman, Nikhil Sanjay Bapat, TXiKiBoT, BJNartowt, Antoni Barau, Rei-bot, Anonymous Dissident, Drestros power, Hai2410, Vendrov, Leafyplant, Raymondwinn, Billgdiaz, Mwilso24, Kpedersen1, Mouse is back, Koen Van de moortel~enwiki, UffeHThygesen, Synthebot, Sesshomaru, Locke9k, Arcfrk, Nagy, Tennismaniac2112, Bojack727, Katzmik, EmxBot, Vbrayne, Kbrose, SieBot, Wolf.312, Moonriddengirl, Paradoctor, Gerakibot, Vanished user 82345ijgeke4tg, Arjun r acharya, Happysailor, Radon210, AngelOfSadness, LidiaFourdraine, Georgette2, Hamiltondaniel, WikiLaurent, Geoff Plourde, Mad540trix, Dolphin51, Emansf, ClueBot, Compdude47, Foxj, Yurko~enwiki, The Thing That Should Not Be, Ciacco, Plastikspork, Dtguelph, Riskdoc, Drmies, Bbanerje, ILikeMIDI, Josemald, Lbertolotti, DragonBot, Djr32, Awickert, Graphitepalms, PhySusie, Tnxman307, M.O.X, Wingwongdong, Revotfel, SchreiberBike, Galor612, Versus22, Edkarpov, Passwordwas1234, DumZiBoT, TimothyRias, Tuuky, XLinkBot, Gnowor, Superkan619, BodhisattvaBot, Boob12, Ost316, Quidproquo2004, Gonfer, MilesTerrex, Subversive.sound, Private Pilot, WikiDao, Aunt Entropy, NCDane, JohnBonham69, Debzer, Phidus, Addbot, Eric Drexler, Tanhabot, Favonian, Ruddy9hell, Causticorulos, Mean Free Path, Dougbateman, Tide rolls, Suz115, Gatewayofintrigue, Gail, Legobot, Luckas-bot, Yobot, Zaereth, WikiDan61,

12.1. TEXT


Ht686rg90, Legobot II, Kissnmakeup, JHoffmueller~enwiki, AnomieBOT, Cantanchorus, IRP, Galoubet, Piano non troppo, Materialscientist, Citation bot, ArthurBot, DirlBot, Branxton, FreeRangeFrog, Xqbot, Engineering Guy, Addihockey10, Jeffrey Mall, DSisyphBot, Necron909, Raffamaiden, Srich32977, Almabot, Munozdj, Schwijker, GrouchoBot, Tnf37, Ute in DC, Philip2357, Omnipaedista, RibotBOT, Waleswatcher, Smallman12q, Garethb1961, Mishka.medvezhonok, Chjoaygame, GT5162, Maghemite, C1t1v151on, Theowoo, Craig Pemberton, BenzolBot, Kwiki, Vh mby, MorphismOfDoom, DrilBot, Pinethicket, I dream of horses, HRoestBot, Marsiancba, Martinvl, Calmer Waters, Jschnur, RedBot, Tcnuk, Nora lives, SkyMachine, IVAN3MAN, Nobleness of Mind, Quantumechanic, TobeBot, Jschissel, Lotje, DLMcN, Dinamik-bot, Vrenator, Lordloihi, Bookbuddi, Rr parker, Stroppolo, Gegege13, DARTH SIDIOUS 2, Mean as custard, Woogee, Dick Chu, Regancy42, Drpriver, Massieu, Prasadmalladi, EmausBot, John of Reading, Lea phys, 12seda78, 478jjjz, Heoigi, Netheril96, Dcirovic, K6ka, Serketan, Capcom1116, Oceans and oceans, Akhil 0950, JSquish, Fæ, Mkratz, Lateg, Ὁ οἶστρος, Cobaltcigs, Quondum, Glockenklang1, Parodi, Music Sorter, Pachyphytum, Schurasbrat, Zueignung, Carmichael, RockMagnetist, Tritchls, GP modernus, DASHBotAV, ResearchRave, Mikhail Ryazanov, Debu334, ClueBot NG, Tschijnmotschau, Intoronto1125, Chester Markel, Marechal Ney, Widr, Natron25, Amircrypto, Helpful Pixie Bot, Art and Muscle, Jack sherrod, Ramaksoud2000, Bibcode Bot, BZTMPS, Jeffscott007, Scyllagist, Bths83Cu87Aiu06, Juro2351, Paolo Lipparini, DIA-888, FutureTrillionaire, Zedshort, Cky2250, Uopchem2510, Uopchem2517, Millennium bug, Justincheng12345-bot, Bobcorn123321, LEBOLTZMANN2, Smileguy91, Toni 001, ChrisGualtieri, Layzeeboi, Adwaele, JYBot, APerson, AlecTaylor, Thinkadoodle, Webclient101, Mogism, Makecat-bot, Jiejie9988, CuriousMind01, Sfzh, Ssteve90266, KingQueenPrince, Blue3snail, Thearchontect, Spetalnick, Rjg83, Curatrice, Random Dude Who 
Is Cool, Sajjadha, Mattia Guerri, Probeb217, Loverthehater, TheNyleve, Rkswb, Prokaryotes, DavRosen, Damián A. Fernández Beanato, Bruce Chen 0010334, Jianhui67, W. P. Uzer, PhoenixPub, Technoalpha, ProKro, Anrnusna, Saad bin zubair, QuantumMatt101, Dragonlord Jack, Elenceq, Monkbot, Yikkayaya, Eczanne, Lamera1234, TaeYunPark, ClockWork96, Georgeciobanu, Gbkrishnappa2015, Eliodorochia, KasparBot, Asterixf2, Gaeanautes, Ericliu shu, Miller.alexb, Tanmay pathak987654, TomKaufmann869, Spinrade, Stemwinders, Ssmmachen, Samuelchuuu, PhyKBA, JosiahWilard, WandaLan, Sir.Arjit Chauhan and Anonymous: 807 • Pressure Source: https://en.wikipedia.org/wiki/Pressure?oldid=717281106 Contributors: AxelBoldt, Magnus Manske, Mav, Bryan Derksen, Zundark, The Anome, Tarquin, Cable Hills, Peterlin~enwiki, DavidLevinson, Jdpipe, Heron, Patrick, Infrogmation, Smelialichu, Michael Hardy, Tim Starling, Pit~enwiki, Fuzzie, GTBacchus, Delirium, Minesweeper, Egil, Mkweise, Ellywa, Ahoerstemeier, Mac, Александър, Glenn, Smack, GRAHAMUK, Halfdan, Ehn, Emperorbma, RodC, Charles Matthews, Jay, Pheon, DJ Clayworth, Tpbradbury, Jimbreed, Omegatron, Fvw, Robbot, Hankwang, Pigsonthewing, Chris 73, R3m0t, Peak, Merovingian, Bkell, Moink, Hadal, UtherSRG, Aetheling, Cronian~enwiki, Tea2min, Giftlite, Smjg, Harp, Wolfkeeper, Tom harrison, Herbee, Mark.murphy, Wwoods, Michael Devore, Bensaccount, Thierryc, Jackol, Simian, Gadfium, Lst27, Anoopm, Ackerleytng, Jossi, DragonflySixtyseven, Johnflux, Icairns, Zfr, Sam Hocevar, Lindberg G Williams Jr, Urhixidur, Peter bertok, Sonett72, Rich Farmbrough, Guanabot, Vsmith, Sam Derbyshire, Mani1, Paul August, MarkS, SpookyMulder, LemRobotry, Calair, Pmcm, Lankiveil, Joanjoc~enwiki, Shanes, Sietse Snel, RoyBoy, Spoon!, Bobo192, Marco Polo, Fir0002, Meggar, Duk, LeonardoGregianin, Evgeny, Foobaz, Dungodung, La goutte de pluie, Unused000701, MPerel, Hooperbloob, Musiphil, Alansohn, Brosen~enwiki, Dbeardsl, Jeltz, Goldom, Kotasik, Katana, PAR, Malo, Snowolf, Velella, 
Ish ishwar, Shoefly, Gene Nygaard, ZakuSage, Oleg Alexandrov, Reinoutr, Armando, Pol098, Commander Keane, Keta, Wocky, Isnow, Crucis, Gimboid13, Palica, FreplySpang, NebY, Koavf, Isaac Rabinovitch, RayC, Tawker, Daano15, Yamamoto Ichiro, FlaBot, Gurch, AlexCovarrubias, Takometer, Yggdrasilsroot, Srleffler, Ahunt, Chobot, DVdm, YurikBot, Zaidpjd~enwiki, Jimp, Spaully, Ytrottier, SpuriousQ, Stephenb, Gaius Cornelius, Yyy, Alex Bakharev, Bovineone, Wimt, NawlinWiki, Wiki alf, Test-tools~enwiki, Kdkeller, Dhollm, Moe Epsilon, Alex43223, JHCaufield, Scottfisher, Deeday-UK, FF2010, Light current, Johndburger, Redgolpe, HereToHelp, Tonyho, RG2, Profero, NeilN, ChemGardener, SmackBot, RDBury, Blue520, KocjoBot~enwiki, Jrockley, Gilliam, Skizzik, Jamie C, Bluebot, Audacity, NCurse, MK8, Oli Filth, MalafayaBot, SchfiftyThree, Complexica, Kourd, DHN-bot~enwiki, Colonies Chris, Zven, Suicidalhamster, Can't sleep, clown will eat me, DHeyward, Fiziker, JonHarder, Yidisheryid, Fuhghettaboutit, Tvaughn05, Bowlhover, Nakon, Kntrabssi, Dreadstar, Smokefoot, Drphilharmonic, Sadi Carnot, FelisLeo, Cookie90, SashatoBot, Finejon, Dbtfz, Gobonobo, Middlec, Tktktk, Mbeychok, BLUE, Chodorkovskiy, Pflatau, MarkSutton, Willy turner, Waggers, Peter Horn, Hgrobe, Shoeofdeath, Wjejskenewr, CharlesM, Courcelles, Tawkerbot2, Bstepp99, Petr Matas, Zakian49, Fnfal, WeggeBot, Gerhardt m, Cydebot, Fnlayson, Gogo Dodo, Rracecarr, Dancter, Odie5533, AndersFeder, Bookgrrl, Karuna8, Epbr123, Bot-maru, LeBofSportif, Headbomb, Marek69, Iviney, Greg L, Oreo Priest, Porqin, AntiVandalBot, Garbagecansrule, Opelio, Credema, Adz 619, B7582, JAnDbot, Hemingrubbish, MER-C, Nthep, Marsey04, Hello32020, Andonic, Easchiff, Magioladitis, Bongwarrior, VoABot II, JNW, Rivertorch, Midgrid, Dirac66, Chris G, DerHexer, Waninge, Yellowing, Mania112, Ashishbhatnagar72, Wikianon, Seba5618, MartinBot, Rob0571, LedgendGamer, J.delanoy, Trusilver, Piercetheorganist, Mike.lifeguard, Gzkn, Lantonov, Salih, Mikael Häggström, 
Yadevol, Warut, Belovedfreak, Cmichael, Fylwind, SlightlyMad, M bastow, TraceyR, Idioma-bot, VolkovBot, Trebacz, Martin Cole, Philip Trueman, Dbooksta, TXiKiBoT, Oshwah, Zidonuke, Malinaccier, Ranmamaru, Hqb, JayC, Qxz, Anna Lincoln, Jetforme, Martin451, From-cary, Zondi, Greg searle, Krushia, Vincent Grosskopf, Neparis, Admkushwaha, EJF, SieBot, Coffee, Tresiden, Caltas, Arda Xi, AlonCoret, Flyer22 Reborn, Tiptoety, Antzervos, Oxymoron83, Sr4delta, Lightmouse, The Valid One, OKBot, Vituzzu, StaticGull, Anchor Link Bot, TheGreatMango, Geoff Plourde, Dolphin51, Denisarona, Xjwiki, Faithlessthewonderboy, Codyfinke6, ClueBot, LAX, The Thing That Should Not Be, Uxorion, Jan1nad, Smichr, Drmies, Mild Bill Hiccup, Wolvereness, Orthoepy, Liempt, DragonBot, Djr32, Excirial, SubstanceDx99, Joa po, Nigelleelee, Lartoven, Sun Creator, L1f07bscs0035, JamieS93, Razorflame, Plasmic Physics, Versus22, SoxBot III, Uri2~enwiki, Rvoorhees, Antti29, XLinkBot, BodhisattvaBot, FactChecker1199, TZGreat, Gotta catch 'em all yo, Gonfer, Fzxboy, WikiDao, Jpfru2, Addbot, AVand, Some jerk on the Internet, Vanished user kksudfijekkdfjlrd, Betterusername, Sir cumalot, Seán Travers, Ronhjones, Fieldday-sunday, Adrian147, CanadianLinuxUser, Fluffernutter, Morning277, Glane23, Favonian, Jasper Deng, 84user, Tide rolls, Lightbot, Cesiumfrog, Ralf Roletschek, Superboy112233, HerculeBot, Snaily, Legobot, Luckas-bot, Yobot, Ht686rg90, AnomieBOT, DemocraticLuntz, Daniele Pugliesi, Sfaefaol, Jim1138, AdjustShift, Rudolf.hellmuth, Kingpin13, Nyanhtoo, Flewis, Bluerasberry, Materialscientist, Felyza, GB fan, Jemandwicca, Xqbot, Transity, .45Colt, Jeffrey Mall, Wyklety, Gap9551, Time501, GrouchoBot, Derintelligente, ChristopherKingChemist, Mathonius, Energybender, Shadowjams, Keo Ross Sangster, Aaron Kauppi, SD5, Imveracious, BoomerAB, GliderMaven, Pascaldulieu, FrescoBot, LucienBOT, Tlork Thunderhead, BenzolBot, Jamesooders, Haein45, Pinethicket, HRoestBot, Calmer Waters, Hamtechperson, Jschnur, RedBot, 
Marcmarroquin, Pbsouthwood, Jujutacular, Bgpaulus, Jonkerz, Navidh.ahmed, Vrenator, Darsie42, Jeffrd10, DARTH SIDIOUS 2, Onel5969, Mean as custard, DRAGON BOOSTER, Newty23125, William Shi, EmausBot, Tommy2010, Wikipelli, Dcirovic, K6ka, Thecheesykid, JSquish, Shuipzv3, Empty Buffer, Hazard-SJ, Quondum, Talyor Will, Morgankevinj, Perseus, Son of Zeus, Tls60, Orange Suede Sofa, RockMagnetist, DASHBotAV, 28bot, ClueBot NG, Jack Greenmaven, Mythicism, This lousy T-shirt, Neeraj1997, Cj005257, Frietjes, Jessica-NJITWILL, Braincricket, Angelo Michael, Widr, Christ1013, Rectangle546, Becarlson, Analwarrior, Wiki13, ElphiBot, Joydeep, Saurabhbaptista, Franz99, YVSREDDY, Cky2250, Matt Hayter, Shikhar1089, , Kasamasa, Anujjjj, Mrt3366, Jack No1, Shyncat, Avengingbandit, Forcez, JYBot, Librscorp, Mysterious Whisper, Superduck463, Frosty, Sriharsh1234, The Anonymouse, Reatlas, Resolution3.464, Paikrishnan, Masterbait123, Jasualcomni, DavidLeighEllis, Montyv, FizykLJF, Wyn.junior, Mahusha,



Trackteur, Johnnprince203, Bog snorkeller, Crystallizedcarbon, Jokeop, Alicemitchellweddingpressure, Esquivalience, Engmeas, KasparBot, JJMC89, Christofferekman, Vishrut Malik, Sharasque, Harmon758, Aditi Tripathi09, Gulercetin, BlueUndigo13, Mizaan Shamaun and Anonymous: 766 • Thermodynamic temperature Source: https://en.wikipedia.org/wiki/Thermodynamic_temperature?oldid=713032171 Contributors: AxelBoldt, The Anome, AdamW, Roadrunner, Baffclan, Lumos3, Robbot, Romanm, Cutler, Giftlite, Lethe, Eequor, Jaan513, Rich Farmbrough, Pjacobi, Xezbeth, RJHall, Evolauxia, Giraffedata, Nk, Keenan Pepper, Ricky81682, PAR, Velella, Skatebiker, Gene Nygaard, Blaxthos, Woohookitty, Rparson, Benbest, Pol098, Emerson7, DePiep, Nanite, Koavf, Erebus555, Gurch, Kri, Spacepotato, Loom91, CambridgeBayWeather, Trovatore, Dhollm, BOT-Superzerocool, Enormousdude, Pifvyubjwm, Smurrayinchester, Katieh5584, Teply, Sbyrnes321, SmackBot, David Shear, Pedrose, Ephraim33, Chris the speller, Bluebot, Thumperward, Sadads, Sbharris, Henning Makholm, Mion, Sadi Carnot, Schnazola, Breno, JoseREMY, Mgiganteus1, Nonsuch, Collect, Frokor, JRSpriggs, Kylu, Rifleman 82, Thijs!bot, LeBofSportif, Headbomb, Greg L, Braindrain0000, JAnDbot, Poga, WikipedianProlific, Limtohhan, Ashishbhatnagar72, DinoBot, Laura1822, CommonsDelinker, Leyo, Mpk138, ARTE, DorganBot, Skarnani, VolkovBot, Jeff G., Hqb, Geometry guy, Wiae, Kbrose, SieBot, Damorbel, VVVBot, Chromaticity, Lightmouse, Anchor Link Bot, JL-Bot, ImageRemovalBot, ClueBot, ChandlerMapBot, 718 Bot, Estirabot, Sun Creator, Frostus, Addbot, MrOllie, Lightbot, CountryBot, Yobot, AnomieBOT, Daniele Pugliesi, Materialscientist, YBG, Kithira, GrouchoBot, Shirik, Amaury, Vivekakulharia, Dave3457, Chjoaygame, Jatosado, FrescoBot, Simuliid, CheesyBiscuit, Glider87, Pinethicket, ‫عباد مجاهد ديرانية‬, JokerXtreme, Aleitner, Bearycool, EmausBot, Super48paul, KHamsun, Netheril96, Sibom, Dondervogel 2, BrokenAnchorBot, Donner60, Frangojohnson, ChuispastonBot, ClueBot NG, 
Gareth Griffith-Jones, Matthiaspaul, Snotbot, Frietjes, Jeremy W Powell, BG19bot, Entton1, Bauka91 91, Kisokj, Cyberbot II, YFdyhbot, Ugog Nizdast, Wikifan2744, Bubba58, Johnny Cook12345678987 and Anonymous: 85 • Volume (thermodynamics) Source: https://en.wikipedia.org/wiki/Volume_(thermodynamics)?oldid=690570181 Contributors: Gene Nygaard, Physchim62, Dhollm, Gilliam, Dreadstar, Md2perpe, Cydebot, Mikael Häggström, Lightmouse, Ktr101, Clayt85, MystBot, Addbot, Lightbot, Yobot, Ptbotgourou, Daniele Pugliesi, Miracleworker5263, ‫قلی زادگان‬, Louperibot, Trappist the monk, EmausBot, ZéroBot, Cobaltcigs, ClueBot NG, Muon, BG19bot, Dbrawner, Cky2250, Acratta, Blackbombchu, TeaLover1996 and Anonymous: 15 • Thermodynamic system Source: https://en.wikipedia.org/wiki/Thermodynamic_system?oldid=710216276 Contributors: Toby Bartels, Fxmastermind, Eric119, Stevenj, Smack, Filemon, Giftlite, Peruvianllama, Andycjp, Blazotron, Rdsmith4, Icairns, Jfraser, Helix84, Mdd, Alansohn, Rw63phi, PAR, Pion, Jheald, Dan100, BD2412, Chobot, Wavesmikey, Gaius Cornelius, Dhollm, Jpbowen, Bota47, Light current, E Wing, SmackBot, Bomac, MalafayaBot, Stepho-wrs, DinosaursLoveExistence, Sadi Carnot, 16@r, CmdrObot, Cydebot, Krauss, Sting, Headbomb, Stannered, Akradecki, JAnDbot, Athkalani~enwiki, MSBOT, .anacondabot, VoABot II, Rich257, KConWiki, Dirac66, An1MuS, Ac44ck, Pbroks13, Nev1, Trusilver, Maurice Carbonaro, Cmbankester, Usp, VolkovBot, Kbrose, PaddyLeahy, SieBot, Mercenario97, OKBot, ClueBot, Auntof6, Excirial, PixelBot, Wdford, Mikaey, SchreiberBike, Addbot, CarsracBot, Redheylin, Glane23, Ht686rg90, Fraggle81, Becky Sayles, AnomieBOT, Materialscientist, ArthurBot, Xqbot, DSisyphBot, GrouchoBot, Chjoaygame, FrescoBot, Pshmell, Pinethicket, RedBot, Thinking of England, Artem Korzhimanov, AznFiddl3r, EmausBot, Abpk62, Glockenklang1, ClueBot NG, Gokulchandola, Loopy48, BZTMPS, BG19bot, Gryffon5147, Tutelary, ChrisGualtieri, Upsidedowntophat, Adwaele, Frosty, PhoenixPub, Eclipsis Proteo, 
Zortwort, Klaus SchmidtRohr and Anonymous: 75 • Heat capacity Source: https://en.wikipedia.org/wiki/Heat_capacity?oldid=715890769 Contributors: Heron, Edward, Patrick, Michael Hardy, Ppareit, Looxix~enwiki, Ellywa, Julesd, Glenn, Samw, Tantalate, Krithin, Smallcog, Schusch, Romanm, Modulatum, Sverdrup, Giftlite, BenFrantzDale, Bensaccount, Jason Quinn, Bobblewik, ThePhantom, Karol Langner, Icairns, Gscshoyru, Tsemii, Edsanville, Vsmith, Xezbeth, Nabla, Joanjoc~enwiki, Kwamikagami, RAM, Jung dalglish, Pearle, I-hunter, Yhr, Mc6809e, PAR, Jheald, Gene Nygaard, Ian Moody, Kelly Martin, Pol098, Palica, Marudubshinki, Rjwilmsi, FlaBot, Margosbot~enwiki, Yrfeloran, Chobot, DVdm, YurikBot, Wavelength, RobotE, Jimp, RussBot, Madkayaker, Gaius Cornelius, Grafen, Trovatore, Dhollm, Voidxor, E2mb0t~enwiki, Poppy, JPushkarH, Mumuwenwu, SDS, GrinBot~enwiki, Bo Jacoby, Tom Morris, Hansonrstolaf, Edgar181, Skizzik, ThorinMuglindir, Chris the speller, Complexica, Sbharris, Sct72, John, JorisvS, CaptainVindaloo, Spiel496, MTSbot~enwiki, Iridescent, V111P, The Letter J, Vaughan Pratt, CmdrObot, Shorespirit, Quarkboard, Myasuda, Cydebot, Christian75, Mikewax, Thijs!bot, Memty Bot, Andyjsmith, Marek69, Greg L, Vincent88~enwiki, Ste4k, JAnDbot, BenB4, Magioladitis, Riceplaytexas, Engineman, Chemical Engineer, Dirac66, Mythealias, Anaxial, Alro, R'n'B, Leyo, Mausy5043, Thermbal, Brien Clark, Notreallydavid, NewEnglandYankee, RayForma, Ojovan, Brvman, AlnoktaBOT, TheOtherJesse, 8thstar, Philip Trueman, Oshwah, Aymatth2, Meters, Demize, Kbrose, JDHeinzmann, Damorbel, BotMultichill, Cwkmail, Revent, Flyer22 Reborn, Allmightyduck, Anchor Link Bot, Dolphin51, Denisarona, Elassint, ClueBot, Bbanerje, Auntof6, Dh78~enwiki, Djr32, KyuubiSeal, Rathemis, Peacheshead, Johnuniq, TimothyRias, Forbes72, WikHead, NellieBly, Alberisch~enwiki, Gniemeyer, Addbot, Boomur, CanadianLinuxUser, Keds0, Snaily, Yobot, AnomieBOT, DemocraticLuntz, Rubinbot, Daniele Pugliesi, Materialscientist, Citation bot, 
Eumolpo, Ulf Heinsohn, Chthonicdaemon, GrouchoBot, Ccmwiki~enwiki, Tufor, A. di M., Thehelpfulbot, Khakiandmauve, Chjoaygame, Banak, Italianice84, Bergdohle, Mfwitten, Cannolis, Citation bot 1, Maggyero, Chenopodiaceous, Pinethicket, I dream of horses, Dheknesn, Mogren, Dtrx, Sbembenek18, Thái Nhi, Soeren.b.c, Minimac, J36miles, EmausBot, John of Reading, Ajraddatz, Tpudlik, Dewritech, Gowtham vmj, Onegumas, Wikipelli, K6ka, Hhhippo, Ronk01, Offsure, Quondum, Mmww123, AManWithNoPlan, Wayne Slam, Hpubliclibrary, Donner60, ChuispastonBot, RockMagnetist, 28bot, Pulsfordp, ClueBot NG, Cwmhiraeth, Ulflund, School of Stone, Physics is all gnomes, The Master of Mayhem, O.Koslowski, Rezabot, Danim, MerlIwBot, ImminentFate, Magneticmoment, Helpful Pixie Bot, Lolm8, Calabe1992, Bibcode Bot, ElZarco, BG19bot, Yafjj215, AvocatoBot, Ushakaron, Tcep, Jschmalzel, Saiprasadrm, Zedshort, Physicsch, Martkat08, MathewTownsend, Anthonymcnug, BattyBot, David.moreno72, VijayGargUA, Cyberbot II, Ytic nam, Heithm, LHcheM, Adwaele, JYBot, Webclient101, Yauran, Makecat-bot, Sarah george mesiha, Zmicier P., Mgibby5, Reatlas, Joeinwiki, C5st4wr6ch, Epicgenius, Luke arnold16, Akiaterry, AresLiam, Kogge, Newestcastleman, JCMPC, Kernkkk, Meumeul, Amortias, Baharmajorana, Mario Castelán Castro, Fleivium, TaeYunPark, LfSeoane, Thizzlehatter, Zppix, Cyrej, Scipsycho, Nickabernethy, Sweepy, TheOldOne1939, Clinton Kepler and Anonymous: 312 • Compressibility Source: https://en.wikipedia.org/wiki/Compressibility?oldid=711876248 Contributors: Maury Markowitz, Michael Hardy, Aarchiba, Moriori, Chris Roy, Mor~enwiki, Mintleaf~enwiki, BenFrantzDale, Leonard G., Pne, Sam Hocevar, HasharBot~enwiki, AMR, PAR, Count Iblis, Gene Nygaard, GregorB, Rjwilmsi, Cryonic Mammoth, Deklund, RobotE, RussBot, Twin Bird, Dhollm, Valeriecoffman, HeartofaDog, Commander Keane bot, Rpspeck, Powerfool, COMPFUNK2, Wiz9999, Mwtoews, John, Iepeulas, Lenoxus, Pacerlaser, Courcelles, Covalent, Novous, TheTito, Basar, Thijs!bot, 
Headbomb, JustAGal, EarthPerson, JAnDbot, Tigga, Ibjt4ever, Magioladitis, Ehdr, Msd3k, Red Sunset, R'n'B, Deans-nl, Zygimantus, Uncle Dick, KudzuVine, Sandman619, CWii, YuryKirienko, Wiae, Andy Dingley, Gerakibot, Ra'ike, Algorithms, ClueBot, Binksternet, Tzm41, Crowsnest, Freireib, Addbot, DOI bot, TStein, Mpfiz, Alfie66, Luckas-bot, Yobot, Daniele Pugliesi,

Citation bot 1, Pinethicket, Agrasa, RjwilmsiBot, Ankid, EmausBot, ZéroBot, Redhanker, AManWithNoPlan, Stwalczyk, Whoop whoop pull up, Mjbmrbot, ClueBot NG, Helpful Pixie Bot, Bibcode Bot, BG19bot, Mn-imhotep, Eio, Mogism, Anrnusna, Trackteur and Anonymous: 44 • Thermal expansion Source: https://en.wikipedia.org/wiki/Thermal_expansion?oldid=708928126 Contributors: Fred Bauder, Delirium, Andrewman327, Cdang, Giftlite, BenFrantzDale, Alexf, Deewiant, Thorsten1, Grm wnr, ChrisRuvolo, Vsmith, Bender235, Quietly, Art LaPella, Hooperbloob, Knucmo2, Zachlipton, Alansohn, PAR, Snowolf, TaintedMustard, Gene Nygaard, StuTheSheep, Linas, Mindmatrix, Aidanlister, Pol098, Firien, Knuckles, Prashanthns, Susato, Paxsimius, Mandarax, NCdave, Jclemens, Nanite, Rjwilmsi, Matt Deres, ACrush, Gurch, Chobot, YurikBot, Charles Gaudette, Akamad, Alex Bakharev, ArcticFlame, Grafen, Dhollm, Moe Epsilon, DeadEyeArrow, CWenger, GrinBot~enwiki, That Guy, From That Show!, Luk, Yvwv, SmackBot, Slashme, Da2ce7, Eupedia, Gilliam, Reza1615, EndingPop, Mion, Harryboyles, ML5, Paladinwannabe2, Dan Gluck, Wizard191, Iridescent, Courcelles, Mcginnly, Ironmagma, Saintrain, Thijs!bot, Epbr123, Headbomb, Nick Number, Escarbot, Porqin, QuiteUnusual, RogueNinja, JAnDbot, Ibjt4ever, Jinxinzzi, Asplace, Bongwarrior, VoABot II, JamesBWatson, Christophe.Finot, Raggiante~enwiki, Cardamon, R'n'B, Zygimantus, Eybot~enwiki, J.delanoy, Trusilver, Dani setiawan, Mike.lifeguard, Davidprior, Afluegel, Jcwf, TomasBat, In Transit, STBotD, Ojovan, AntoniusJ~enwiki, Squids and Chips, WOSlinker, Hqb, Leaf of Silver, Claidheamohmor, Gerakibot, Yintan, Mothmolevna, Chromaticity, Masgatotkaca, Csloomis, OKBot, AllHailZeppelin, Kanonkas, ClueBot, Sealsrock!, The Thing That Should Not Be, Ken l lee, Mild Bill Hiccup, Harland1, Largedizkool, Adrian dakota, DragonBot, Awickert, CohesionBot, PixelBot, Leonard^Bloom, P1415926535, La Pianista, Ammm3478, 1ForTheMoney, Ngebbett, David.Boettcher, Addbot, Xp54321, Otisjimmy1, Chzz, 
Jgrosay~enwiki, Quercus solaris, Tide rolls, Teles, Karthik3186, Yobot, Zaereth, AnomieBOT, Götz, Piano non troppo, Materialscientist, E235, Citation bot, Clark89, LilHelpa, Xqbot, Qq19342174, Cristianrodenas, RibotBOT, Kyng, Dpinna85, Dan6hell66, Jatosado, Black.jeff, Pinethicket, A8UDI, Serols, ‫کاشف عقیل‬, Tbhotch, RjwilmsiBot, MagnInd, Bento00, DASHBot, Hhhippo, Pololei, Confession0791, AManWithNoPlan, Puffin, RockMagnetist, Teaktl17, ClueBot NG, Ronaldjo, Gareth Griffith-Jones, Satellizer, Ulrich67, Mmarre, Helpful Pixie Bot, Bibcode Bot, BG19bot, Angry birds fan Club, Dentalplanlisa, Eio, Arc1977, BattyBot, Tmariem, Mahmud Halimi Wardag, Owoturo tboy, YannLar, Csuino, TwoTwoHello, Hwangrox99, QueenMisha, Reatlas, LukeMcMahon, Katelyn.kitzinger, Alexwho314, Aguner, Lektio, Prokaryotes, Ginsuloft, Stamptrader, JOb, VolpeCenter, Emaw61, Monkbot, Jkutil18, Mybalonyhasafirstname, Trackteur, R-joven, Richard Hebb, DiscantX, Deepak pandey mj, JenniferBaeuml, Pusith95 and Anonymous: 320 • Thermodynamic potential Source: https://en.wikipedia.org/wiki/Thermodynamic_potential?oldid=714210912 Contributors: Xavic69, Michael Hardy, Cimon Avaro, Trainspotter~enwiki, Terse, Phil Boswell, Aetheling, Giftlite, Karol Langner, Icairns, Edsanville, Willhsmit, Discospinster, El C, Pearle, Keenan Pepper, PAR, Fawcett5, Count Iblis, V8rik, Rjwilmsi, JillCoffin, ChrisChiasson, GangofOne, Wavesmikey, Chaos, Dhollm, Bota47, That Guy, From That Show!, SmackBot, Incnis Mrsi, Pavlovič, Bomac, Kmarinas86, MalafayaBot, Huwmanbeing, Cybercobra, Drphilharmonic, Sadi Carnot, Eli84, Kareemjee, Ring0, LeBofSportif, Headbomb, JAnDbot, Magioladitis, Joshua Davis, Dorgan, Lseixas, Sheliak, VolkovBot, Larryisgood, VasilievVV, A4bot, Nightwoof, Fractalizator, Kbrose, Hobojaks, SieBot, Thermodude, Pinkadelica, EoGuy, Tizeff, Niceguyedc, Vql, Alexbot, Addbot, DOI bot, Steven0309, Download, Numbo3-bot, Serge Lachinov, Yobot, Fragaria Vesca, Ptbotgourou, Aboalbiss, Rubinbot, Danno uk, Citation bot, 
ArthurBot, LilHelpa, Lianglei0304, FrescoBot, DrilBot, EmausBot, WikitanvirBot, Netheril96, Dcirovic, Shivankmehra, SporkBot, Helpful Pixie Bot, BG19bot, F=q(E+v^B), ArmbrustBot, JOb, Monkbot and Anonymous: 46 • Enthalpy Source: https://en.wikipedia.org/wiki/Enthalpy?oldid=717005226 Contributors: Bryan Derksen, Taw, Toby Bartels, Peterlin~enwiki, Edward, Llywrch, Kku, Gbleem, Looxix~enwiki, Darkwind, Julesd, AugPi, Smack, Ehn, Omegatron, Lumos3, Gentgeen, Robbot, Fredrik, Chris 73, Puckly, Caknuck, Lupo, Diberri, Buster2058, Connelly, Giftlite, Donvinzk, Markus Kuhn, Bensaccount, Luigi30, Glengarry, LucasVB, Gunnar Larsson, Karol Langner, Neffk, Icairns, C4~enwiki, Tsemii, Mike Rosoft, Discospinster, Rich Farmbrough, Guanabot, ZeroOne, RoyBoy, Kedmond, Atraxani, Giraffedata, Helix84, Sam Korn, Mdd, Benjah-bmm27, PAR, BernardH, Dagimar, Count Iblis, Drat, Dirac1933, Vuo, Gene Nygaard, Wesley Moy, StradivariusTV, Isnow, Palica, Mandarax, BD2412, JonathanDursi, Yurik, Eteq, Tlroche, Pasky, Dar-Ape, FlaBot, Jrtayloriv, TeaDrinker, Don Gosiewski, Srleffler, Physchim62, Flying Jazz, YurikBot, Wavelength, TexasAndroid, Jimp, Brandmeister (old), Dotancohen, Chaos, Salsb, Banes, Dhollm, Tony1, Someones life, Izuko, Cmcfarland, Jrf, RG2, Infinity0, Mejor Los Indios, Tom Morris, Itub, Sardanaphalus, SmackBot, Slashme, Bomac, Edgar181, Kdliss, Betacommand, JSpudeman, Kmarinas86, Bduke, Master of Puppets, Complexica, JoeBlogsDord, Sciyoshi~enwiki, DHN-bot~enwiki, Skatche, Sbharris, Colonies Chris, JohnWheater, TheKMan, Fbianco, Drphilharmonic, Sadi Carnot, Ohconfucius, Spiritia, SashatoBot, Mgiganteus1, The real bicky, Beetstra, Teeteetee, Spiel496, Willandbeyond, Happy-melon, Gosolowe, Az1568, Dc3~enwiki, Mikiemike, Robbyduffy, WeggeBot, Grj23, Karenjc, Myasuda, Mct mht, Gregbard, Phdrahmed, Yaris678, Cydebot, Kupirijo, Llort, Christian75, Viridae, Tunheim, Chandni chn, Thijs!bot, Runch, Odyssey1989, Headbomb, John254, F l a n k e r, Dawnseeker2000, Escarbot, The Obento 
Musubi, Teentje, Gioto, Seaphoto, Madbehemoth, Ani td, JAnDbot, Hans Mayer, MER-C, Larrybaxter, RebelRobot, JamesBWatson, Dirac66, User A1, DerHexer, JamMan, Gwern, MartinBot, JCraw, Keith D, Pbroks13, Felixbecker2, Hairchrm, S1dorner, Rlsheehan, Numbo3, Salih, Ohms law, BlGene, Smitjo, DorganBot, Useight, Lseixas, Sheliak, AlnoktaBOT, VasilievVV, TXiKiBoT, Jomasecu, BertSen, A4bot, Anonymous Dissident, Broadbot, Mezzaluna, Venny85, Nobull67, Andy Dingley, Yk Yk Yk, GauteHope, Riick, AlleborgoBot, Neparis, LOTRrules, Kbrose, SieBot, Spartan, ToePeu.bot, Phe-bot, Matthew Yeager, Conairh, Antonio Lopez, Evilstudent, WikiLaurent, Dolphin51, Tuntable, ClueBot, Hjlim, Qhudspeth, Wikisteff, Jusdafax, P. M. Sakkas, Morekitsch, Pdch, Ngebendi, Natty sci~enwiki, Thehelpfulone, AC+79 3888, Qwfp, Egmontaz, Crowsnest, Rreagan007, Gonfer, Some jerk on the Internet, Wickey-nl, EconoPhysicist, Causticorulos, Wakeham, Tide rolls, Lightbot, Gail, Margin1522, Legobot, Yobot, Amirobot, KamikazeBot, KarlHegbloom, TimeVariant, AnomieBOT, Daniele Pugliesi, Materialscientist, ArthurBot, LilHelpa, Xqbot, Br77rino, GrouchoBot, Omnipaedista, RibotBOT, Kyng, Vikky2904, Bytbox, FrescoBot, Citation bot 1, Winterst, AMSask, Lesath, Jandalhandler, TobeBot, Tehfu, Begomber, Matlsarefun, Diannaa, Sergius-eu, EmausBot, John of Reading, Faraz shaukat ali, KHamsun, Trinibones, Hhhippo, Grondilu, Shivankmehra, Raggot, Flag cloud, Jadzia2341, Vacant999, Elaz85, Scientific29, RockMagnetist, DASHBotAV, Xanchester, Mikhail Ryazanov, ClueBot NG, Senthilvel32, Mesoderm, TransportObserver, Helpful Pixie Bot, Calabe1992, Bibcode Bot, BG19bot, Hz.tiang, J991, Kookookook, Bioe205fun, ChrisGualtieri, Adwaele, Emresulun93, BeaumontTaz, Frosty, Gaurav.gautam17, Mark viking, Coleslime5403, Bruce Chen 0010334, Jianhui67, Stevengus, Elenceq, AKS.9955, Jim Carter, Voluntas V, Yesufu29, Undefined51 and Anonymous: 353 • Internal energy Source: https://en.wikipedia.org/wiki/Internal_energy?oldid=717018178 Contributors: 
Bryan Derksen, Peterlin~enwiki, Patrick, Michael Hardy, SebastianHelm, Cyan, Andres, J D, Robbot, Hankwang, Fabiform, Giftlite, Andries, Dratman, Bensaccount, Bobblewik, H Padleckas, Icairns, Edsanville, Spiko-carpediem~enwiki, El C, Shanes, Euyyn, Kine, Nhandler, Haham hanuka, Lysdexia, PAR, Count Iblis, RainbowOfLight, Reaverdrop, GleasSpty, Isnow, BD2412, Qwertyus, Saperaud~enwiki, Rjwilmsi, Thechamelon, HappyCamper, Margosbot~enwiki, ChrisChiasson, DVdm, Bgwhite, YurikBot, RussBot, Stassats, Dhollm, 2over0, RG2, SmackBot, Oloumi, David Shear, KocjoBot~enwiki, Ddcampayo, BirdValiant, ThorinMuglindir, Zgyorfi~enwiki, Persian Poet Gal, MalafayaBot, Complexica, DHN-bot~enwiki, Sbharris, Rrburke, AFP~enwiki, Henning Makholm, Sadi Carnot, Vina-iwbot~enwiki, Stikonas, Vaughan Pratt, CmdrObot, Xanthoxyl, Cydebot,

Christian75, Omicronpersei8, Barticus88, Headbomb, Bobblehead, Mr pand, Ste4k, Trakesht, JAnDbot, PhilKnight, Davidtwu, Magioladitis, VoABot II, Cardamon, MartinBot, R'n'B, LedgendGamer, Pdcook, Lseixas, Squids and Chips, Sheliak, VolkovBot, TXiKiBoT, SQL, Riick, SHL-at-Sv, Kbrose, SieBot, Da Joe, The way, the truth, and the light, Andrewjlockley, Dolphin51, Atif.t2, ClueBot, The Thing That Should Not Be, Mild Bill Hiccup, SuperHamster, Djr32, CohesionBot, Jusdafax, DeltaQuad, Hans Adler, ChrisHodgesUK, Crowsnest, Avoided, Hess88, Thatguyflint, Addbot, Xp54321, DOI bot, Arcturus87, Aboctok, Morning277, CarsracBot, PV=nRT, Luckas-bot, Yobot, Fraggle81, Becky Sayles, AnomieBOT, Daniele Pugliesi, Ipatrol, Materialscientist, The High Fin Sperm Whale, Citation bot, LilHelpa, Xqbot, J04n, GrouchoBot, Mnmngb, MLauba, Vatbey, Chjoaygame, Maghemite, RWG00, Cannolis, Citation bot 1, DrilBot, Pinethicket, Jonesey95, MastiBot, RazielZero, FoxBot, Derild4921, Gosnap0, Artemis Fowl III, LcawteHuggle, John of Reading, WikitanvirBot, Max139, Dewritech, Faolin42, Sportgirl426, GoingBatty, Googamooga, Mmeijeri, Hhhippo, JSquish, Fæ, Timmytoddler, Qclijun, Vramasub, ClueBot NG, NuclearEnergy, Mariraja2007, Cky2250, Aisteco, Acratta, Adwaele, Qsq, Eli4ph, Jamesx12345, Galobtter, Anaekh, SkateTier, Trackteur, The Last Arietta, Scipsycho, LuFangwen, Amangautam1995, Stemwinders, Todyreli and Anonymous: 166 • Ideal gas law Source: https://en.wikipedia.org/wiki/Ideal_gas_law?oldid=717248295 Contributors: CYD, Vicki Rosenzweig, Bryan Derksen, Tarquin, Andre Engels, William Avery, SimonP, FlorianMarquardt, Patrick, JakeVortex, BrianHansen~enwiki, Mark Foskey, Vivin, Tantalate, Ozuma~enwiki, Robbot, COGDEN, Wereon, Isopropyl, Mattflaschen, Enochlau, Alexwcovington, Giftlite, Bensaccount, Alexf, Karol Langner, Icairns, ELApro, Mike Rosoft, Venu62, Noisy, Discospinster, Hydrox, Vsmith, Femto, Grick, Bobo192, Avathar~enwiki, Larryv, Riana, Lee S. 
Svoboda, Shoefly, Gene Nygaard, StradivariusTV, Kmg90, Johan Lont, Mandarax, MassGalactusUniversum, Jan van Male, Eteq, Rjwilmsi, Koavf, Sango123, FlaBot, Intersofia, Jrtayloriv, Fresheneesz, Scroteau96, SteveBaker, Physchim62, Krishnavedala, ARAJ, YurikBot, Huw Powell, Jimp, Quinlan Vos~enwiki, Gaius Cornelius, CambridgeBayWeather, LMSchmitt, Bb3cxv, Dhollm, Ruhrfisch, Acit, Zwobot, T, Someones life, User27091, Smaines, WAS 4.250, 2over0, U.S.Vevek, Nlitement, Junglecat, Bo Jacoby, Bwiki, SmackBot, Mitchan, Incnis Mrsi, Sal.farina, Pennywisdom2099, Dave19880, Kmarinas86, Bluebot, Kunalmehta, Silly rabbit, Tianxiaozhang~enwiki, CSWarren, DHN-bot~enwiki, Metal Militia, Berland, Samir.Mesic, Ollien, PiMaster3, G716, Foxhunt king, Just plain Bill, SashatoBot, Esrever, Mbeychok, JorisvS, IronGargoyle, Ranmoth, Carhas0, Peter Horn, Sifaka, Majora4, Mikiemike, MC10, Astrochemist, Kimtaeil, Christian75, Thijs!bot, Headbomb, Jakirkham, Electron9, RedWasp, Hmrox, AntiVandalBot, KMossey, Nehahaha, Seaphoto, Prolog, Coolhandscot, Fern Forest, AdamGomaa, Magioladitis, VoABot II, Baccyak4H, Kittyemo, ANONYMOUS COWARD0xC0DE, JaGa, Nirupambits, MartinBot, Rock4p, Mbweissman, J.delanoy, SimpsonDG, P.wormer, Nwbeeson, Habadasher, Juliancolton, KudzuVine, Nasanbat, VolkovBot, Error9312, Drax Conqueror, Barneca, Philip Trueman, Rbingama, TXiKiBoT, Malinaccier, Comtraya, Rexeken, LanceBarber, Hanjabba, Riick, Brianga, Hoopssheaffer, Givegains, SieBot, Flyer22 Reborn, Baxter9, Oxymoron83, Smaug123, 123ilikecheese, Lightmouse, JerroldPease-Atlanta, Nskillen, COBot, Adamtester, Thomjakobsen, Pinkadelica, ClueBot, GorillaWarfare, Kharazia, The 888th Avatar, Vql, Jmk, Excirial, Pdch, DumZiBoT, Hseo, TZGreat, Frood, RP459, QuantumGlow, Dj-dios-del-sol, SkyLined, Dnvrfantj, Addbot, Power.corrupts, LaaknorBot, Eelpop, CarsracBot, LinkFA-Bot, Lightbot, Loupeter, Legobot, Yobot, Ptbotgourou, Daniele Pugliesi, JackieBot, Materialscientist, Nickkid5, Quark1005, Craftyminion, GrouchoBot, 
ChristopherKingChemist, RibotBOT, ‫قلی زادگان‬, Dougofborg, Kamran28, Khakiandmauve, StephenWade, EntropyTrap, Lambda(T), Happydude69 yo, Mrahner, Michael93555, D'ohBot, RWG00, Zmcdargh, Citation bot 1, Kishmakov, I dream of horses, RedBot, Pbsouthwood, Cramyourspam, Orenburg1, Geraldo62, Diblidabliduu, Jade Harley, RjwilmsiBot, MagnInd, Steve Belkins, EmausBot, Tdindorf, Razor2988, RA0808, Jerry858, Dcirovic, Ssp37097, JSquish, ZéroBot, Susfele, MarkclX, Stovl, SporkBot, YnnusOiramo, Donner60, Odysseus1479, Theislikerice, RockMagnetist, George Makepeace, ClueBot NG, BubblyWantedXx, Helloimriley, Wrecker1431, Rezabot, Widr, Bibcode Bot, Mariansavu, MusikAnimal, AvocatoBot, Mark Arsten, Trevayne08, F=q(E+v^B), Klilidiplomus, TechNickL1, Egm4313.s12, NJIT HUMrudyh, NJIT HUMNV, Waterproof-breathable, AlanParkerFrance, Dexbot, Epicgenius, I am One of Many, Blackbombchu, Zenibus, JustBerry, Ginsuloft, Keojukwu, DudeWithAFeud, Whizzy1999, Fuguangwei, Evanrelf, Monkbot, Nojedi, Trackteur, ChaquiraM, Smanojprabhakar, Ériugena, CAPTAIN RAJU, Pwags3147, The Master 6969, Qzd, KapteynCook and Anonymous: 400 • Fundamental thermodynamic relation Source: https://en.wikipedia.org/wiki/Fundamental_thermodynamic_relation?oldid=704331825 Contributors: PAR, Batmanand, Count Iblis, John Baez, Dhollm, Katieh5584, SmackBot, Betacommand, Sadi Carnot, Dicklyon, Robomojo, Ahjulsta, Towerman86, Gogobera, BertSen, Kbrose, SieBot, ClueBot, CohesionBot, Addbot, Tnowotny, PV=nRT, LucienBOT, KHamsun, Netheril96, ZéroBot, Makecat, BG19bot, Liquidityinsta, Mela widiawati, Klaus Schmidt-Rohr and Anonymous: 23 • Heat engine Source: https://en.wikipedia.org/wiki/Heat_engine?oldid=716542100 Contributors: Mav, The Anome, Stokerm, Mirwin, Roadrunner, Jdpipe, Heron, Icarus~enwiki, Isis~enwiki, Ram-Man, Ubiquity, Kku, Delirium, Ronz, CatherineMunro, Glenn, GCarty, Charles Matthews, Tantalate, Far neil, Greenrd, Omegatron, Lumos3, Phil Boswell, Robbot, Academic Challenger, Cyrius, Cutler, Buster2058, 
Ancheta Wis, Mat-C, Wolfkeeper, Tom harrison, Mcapdevila, Pashute, PlatinumX, LiDaobing, Karol Langner, Oneiros, NathanHurst, Rich Farmbrough, Vsmith, Liberatus, Femto, Rbj, Jwonder, Jung dalglish, Giraffedata, Nk, Exomnium, Alansohn, PAR, Gene Nygaard, Oleg Alexandrov, Garylhewitt, Fingers-of-Pyrex, Peter Beard, WadeSimMiser, Rtdrury, Rjwilmsi, Lionelbrits, Maustrauser, Fresheneesz, Lmatt, Scimitar, Chobot, DVdm, Triku~enwiki, Siddhant, YurikBot, Wavelength, Borgx, JabberWok, Gaius Cornelius, Wimt, Anomalocaris, Eb Oesch, Dhollm, Scs, Tony1, Bota47, Nikkimaria, Lio , Back ache, A Doon, ArielGold, RG2, Eric Norby, GrinBot~enwiki, SkerHawx, SmackBot, Gilliam, Bluebot, Exprexxo, Complexica, Mbertsch, SundarBot, Bob Castle, Sadi Carnot, Adsllc, Loodog, Beetstra, Stikonas, Dodo bird, Mfield, Hu12, MFago, GDallimore, IanOfNorwich, Mikiemike, BFD1, CuriousEric, Dwolsten, Chris23~enwiki, Cydebot, Odie5533, Michael C Price, DumbBOT, RottweilerCS, Efranco~enwiki, Ϙ, Gralo, Headbomb, Paquitotrek, Strongriley, Northumbrian, EdJogg, TimVickers, Aspensti, JAnDbot, Andrew Swallow, Mauk2, VoABot II, Rich257, JMBryant, Catgut, Animum, Allstarecho, Jtir, Rettetast, Tom Gundtofte-Bruun, Fredrosse, Nono64, FactsAndFigures, Ignacio Icke, Lbeaumont, Andejons, STBotD, WarFox, Engware, Lseixas, Funandtrvl, VolkovBot, Larryisgood, TXiKiBoT, NPrice, LeaveSleaves, Abjkf, Senpai71, Why Not A Duck, SieBot, Gerakibot, Viskonsas, Flyer22 Reborn, Oxymoron83, Animagi1981, YinZhang, Robvanbasten, Dolphin51, Martarius, ClueBot, Toy 121, Arunsingh16, AdrianAbel, Thingg, Vilkapi, YouRang?, Gonfer, Kbdankbot, Klundarr, Addbot, LaaknorBot, CarsracBot, Vyom25, Tide rolls, Lightbot, ‫ماني‬, Loupeter, Megaman en m, Legobot, Luckas-bot, Yobot, Pentajism, Typenolies, AnomieBOT, Daniele Pugliesi, Jim1138, Piano non troppo, Theseeker4, Bluerasberry, Citation bot, LovesMacs, Jeriee, In fact, Shadowjams, GliderMaven, Thayts, ‫محمد طاهر عيسى‬, Steve Quinn, HamburgerRadio, Lotje, Antipastor, Jfmantis, Orphan 
Wiki, Sheeana, Hhhippo, ZéroBot, DavidMCEddy, Matt tuke, Wagino 20100516, Yerocus, Peterh5322, Teapeat, Rememberway, ClueBot NG, Anagogist, Loopy48, Teep111, Widr, Calabe1992, Bibcode Bot, Lowercase sigmabot, BG19bot, MusikAnimal, Zedshort, O8h7w, BattyBot, Bangjiwoo, TooComplicated, Prokaryotes, Monkbot, Tashi19, IvanZhilin, KasparBot, Valaratar, Klaus Schmidt-Rohr, Azamali1947 and Anonymous: 205 • Thermodynamic cycle Source: https://en.wikipedia.org/wiki/Thermodynamic_cycle?oldid=687246415 Contributors: Glenn, Robbot, Wolfkeeper, Dratman, H Padleckas, CDN99, Kjkolb, Gene Nygaard, Palica, Ttjoseph, Siddhant, YurikBot, Borgx, Dhollm, Troodon~enwiki, Covington, KnightRider~enwiki, SmackBot, Gilliam, Bluebot, Tsca.bot, Ryan Roos, DMacks, Mion, Sadi Carnot, UberCryxic, Mbeychok, EmreDuran, Mig8tr, Ring0, Teratornis, Zanhsieh, Thijs!bot, Headbomb, MSBOT, JamesBWatson, Akhil999in, Jtir, MartinBot, Sigmundg, Felipebm, Andy Dingley, Kropotkine 113, Treekids, Ariadacapo, Turbojet, Sylvain.quoilin, Erodium, Cerireid, Skarebo, Addbot, CarsracBot, Yobot, AnomieBOT, Shadowjams, Samwb123, I dream of horses, Bluefist, AXRL, EmausBot, WikitanvirBot, Frostbite sailor, Allforrous, Donner60, ChuispastonBot, ClueBot NG, Incompetence, Guy vandegrift, Zedshort, APerson, Anushrut93, Faizan, Scie8, Hjd28 and Anonymous: 45

12.2 Images

• File:13-07-23-kienbaum-unterdruckkammer-33.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/13-07-23-kienbaum-unterdruckkammer-33.jpg License: CC BY 3.0 Contributors: Own work Original artist: Ralf Roletschek
• File:1D_normal_modes_(280_kB).gif Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/1D_normal_modes_%28280_kB%29.gif License: CC-BY-SA-3.0 Contributors: This is a compressed version of the Image:1D normal modes.gif phonon animation on Wikipedia Commons that was originally created by Régis Lachaume and freely licensed. The original was 6,039,343 bytes and required long-duration downloads for any article which included it. This version is 4.7% the size of the original and loads much faster. This version also has an interframe delay of 40 ms (v.s. the original’s 100 ms). Including processing time for each frame, this version runs at a frame rate of about 20–22.5 Hz on a typical computer, which yields a more fluid motion. Greg L 00:41, 4 October 2006 (UTC). (from http://en.wikipedia.org/wiki/Image:1D_normal_modes_%28280_kB%29.gif) Original artist: Original Uploader was Greg L (talk) at 00:41, 4 October 2006.
• File:Adiabatic.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/49/Adiabatic.svg License: CC-BY-SA-3.0 Contributors: Image:Adiabatic.png Original artist: User:Stannered • File:Aluminium_cylinder.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8a/Aluminium_cylinder.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: • File:Ambox_important.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b4/Ambox_important.svg License: Public domain Contributors: Own work, based off of Image:Ambox scales.svg Original artist: Dsmurat (talk · contribs) • File:Anders_Celsius.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9f/Anders_Celsius.jpg License: Public domain Contributors: This is a cleaned up version of what appears at The Uppsala Astronomical Observatory, which is part of Uppsala University. The full-size original image of the painting appears here, which can be accessed via this history page at the observatory’s Web site. Original artist: Olof Arenius • File:Barometer_mercury_column_hg.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b9/Barometer_mercury_column_hg. jpg License: CC BY-SA 2.5 Contributors: Own work Original artist: Hannes Grobe 19:02, 3 September 2006 (UTC) • File:Benjamin_Thompson.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Benjamin_Thompson.jpg License: Public domain Contributors: http://www.sil.si.edu/imagegalaxy/imagegalaxy_imageDetail.cfm?id_image=3087 http://www.sil.si.edu/digitalcollections/hst/scientific-identity/CF/by_name_display_results.cfm?scientist=Rumford,%20Benjamin% 20Thompson,%20Count Original artist: Not specified[1][2] • File:Boltzmann2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Boltzmann2.jpg License: Public domain Contributors: Uni Frankfurt Original artist: Unknown • File:Brayton_cycle.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3c/Brayton_cycle.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ? 
• File:Can_T=0_be_reached.jpg Source: https://upload.wikimedia.org/wikipedia/en/c/c7/Can_T%3D0_be_reached.jpg License: CC-BY-SA-3.0 Contributors: Made by SliteWrite Original artist: Adwaele • File:Carl_von_Linné.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/68/Carl_von_Linn%C3%A9.jpg License: Public domain Contributors: Nationalmuseum press photo, cropped with colors slightly adjusted Original artist: Alexander Roslin • File:Carnot2.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/ec/Carnot2.jpg License: Public domain Contributors: ? Original artist: ? • File:Carnot_engine_(hot_body_-_working_body_-_cold_body).jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/c7/Carnot_engine_%28hot_body_-_working_body_-_cold_body%29.jpg License: Public domain Contributors: Own work (Original text: I (Libb Thims (talk)) created this work entirely by myself.) Original artist: Libb Thims (talk) • File:Carnot_heat_engine_2.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/22/Carnot_heat_engine_2.svg License: Public domain Contributors: Based upon Image:Carnot-engine.png Original artist: Eric Gaba (Sting - fr:Sting)

• File:Clausius-1.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/34/Clausius-1.jpg License: Public domain Contributors: unknown Original artist: Unknown • File:Clausius.jpg Source: http://www-history.mcs.st-andrews.ac.uk/history/Posters2/Clausius.html Original artist: Original uploader was user:Sadi Carnot at en.wikipedia • File:Close-packed_spheres.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8e/Close-packed_spheres.jpg License: CC-BY-SA-3.0 Contributors: English Wikipedia Original artist: User:Greg L • File:Coefficient_dilatation_lineique_aciers.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b2/Coefficient_dilatation_lineique_aciers.svg License: CC0 Contributors: Own work, data from OTUA Original artist: Cdang • File:Coefficient_dilatation_volumique_isobare_PP_semicristallin_Tait.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/6e/Coefficient_dilatation_volumique_isobare_PP_semicristallin_Tait.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Cdang • File:Commons-logo.svg Source: https://upload.wikimedia.org/wikipedia/en/4/4a/Commons-logo.svg License: CC-BY-SA-3.0 Contributors: ? Original artist: ? • File:Crystal_energy.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/14/Crystal_energy.svg License: LGPL Contributors: Own work conversion of Image:Crystal_128_energy.png Original artist: Dhatfield • File:DebyeVSEinstein.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/54/DebyeVSEinstein.jpg License: Public domain Contributors: ? Original artist: ? • File:Dehnungsfuge.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/d6/Dehnungsfuge.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:Deriving_Kelvin_Statement_from_Clausius_Statement.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/83/ Deriving_Kelvin_Statement_from_Clausius_Statement.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Netheril96 • File:DiatomicSpecHeat1.png Source: https://upload.wikimedia.org/wikipedia/commons/0/07/DiatomicSpecHeat1.png License: Public domain Contributors: Own work Original artist: User:PAR • File:DiatomicSpecHeat2.png Source: https://upload.wikimedia.org/wikipedia/commons/6/64/DiatomicSpecHeat2.png License: Public domain Contributors: Own work Original artist: User:PAR • File:Drikkeglas_med_brud-1.JPG Source: https://upload.wikimedia.org/wikipedia/commons/e/eb/Drikkeglas_med_brud-1.JPG License: CC BY-SA 3.0 Contributors: Own work Original artist: Arc1977 • File:Edit-clear.svg Source: https://upload.wikimedia.org/wikipedia/en/f/f2/Edit-clear.svg License: Public domain Contributors: The Tango! Desktop Project. Original artist: The people from the Tango! project. And according to the meta-data in the file, specifically: “Andreas Nilsson, and Jakub Steiner (although minimally).” • File:Eight_founding_schools.png Source: https://upload.wikimedia.org/wikipedia/commons/8/85/Eight_founding_schools.png License: Public domain Contributors: Own work Original artist: Libb Thims • File:Energy_thru_phase_changes.png Source: https://upload.wikimedia.org/wikipedia/en/1/18/Energy_thru_phase_changes.png License: Cc-by-sa-3.0 Contributors: ? Original artist: ? • File:Entropyandtemp.PNG Source: https://upload.wikimedia.org/wikipedia/commons/9/91/Entropyandtemp.PNG License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons. 
Original artist: AugPi at English Wikipedia • File:First_law_open_system.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/86/First_law_open_system.svg License: Public domain Contributors: • First_law_open_system.png Original artist: • derivative work: Pbroks13 (talk) • File:Folder_Hexagonal_Icon.svg Source: https://upload.wikimedia.org/wikipedia/en/4/48/Folder_Hexagonal_Icon.svg License: Cc-by-sa-3.0 Contributors: ? Original artist: ? • File:GFImg3.png Source: https://upload.wikimedia.org/wikipedia/commons/3/3d/GFImg3.png License: CC BY 2.5 Contributors: Transferred from en.wikipedia to Commons by Sreejithk2000 using CommonsHelper. Original artist: Engware at English Wikipedia • File:GFImg4.png Source: https://upload.wikimedia.org/wikipedia/commons/8/86/GFImg4.png License: CC BY 2.5 Contributors: Transferred from en.wikipedia to Commons by Sreejithk2000 using CommonsHelper. Original artist: Engware at English Wikipedia • File:Gaylussac.jpg Source: https://upload.wikimedia.org/wikipedia/commons/2/2f/Gaylussac.jpg License: Public domain Contributors: chemistryland.com Original artist: François Séraphin Delpech • File:Green_check.svg Source: https://upload.wikimedia.org/wikipedia/commons/0/03/Green_check.svg License: Public domain Contributors: Derived from Image:Yes check.svg by Gregory Maxwell Original artist: gmaxwell

• File:Guillaume_Amontons.png Source: https://upload.wikimedia.org/wikipedia/commons/c/ca/Guillaume_Amontons.png License: Public domain Contributors: circa 1870: French physicist Guillaume Amontons (1663 - 1705) demonstrates the semaphore in the Luxembourg Gardens, Paris in 1690. Original Publication: From an illustration published in Paris circa 1870. Close-up approximating bust. Original artist: Unknown • File:Heat_engine.png Source: https://upload.wikimedia.org/wikipedia/en/a/a2/Heat_engine.png License: CC-BY-SA-3.0 Contributors: ? Original artist: ? • File:Helmet_logo_for_Underwater_Diving_portal.png Source: https://upload.wikimedia.org/wikipedia/commons/5/5e/Helmet_ logo_for_Underwater_Diving_portal.png License: Public domain Contributors: This file was derived from Kask-nurka.jpg: Original artist: Kask-nurka.jpg: User:Julo • File:Ice-calorimeter.jpg Source: https://upload.wikimedia.org/wikipedia/commons/3/35/Ice-calorimeter.jpg License: Public domain Contributors: originally uploaded http://en.wikipedia.org/wiki/Image:Ice-calorimeter.jpg Original artist: Originally en:User:Sadi Carnot • File:IceBlockNearJoekullsarlon.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/71/IceBlockNearJoekullsarlon.jpg License: CC BY-SA 4.0 Contributors: Own work Original artist: Andreas Tille • File:Ice_water.jpg Source: https://upload.wikimedia.org/wikipedia/commons/0/0c/Ice_water.jpg License: Public domain Contributors: ? Original artist: ? • File:Ideal_gas_isotherms.png Source: https://upload.wikimedia.org/wikipedia/commons/e/e2/Ideal_gas_isotherms.png License: Public domain Contributors: ? Original artist: ? 
• File:Ideal_gas_isotherms.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/92/Ideal_gas_isotherms.svg License: CC0 Contributors: Own work Original artist: Krishnavedala • File:Isentropic.jpg Source: https://upload.wikimedia.org/wikipedia/commons/4/4a/Isentropic.jpg License: CC BY-SA 3.0 Contributors: Own work Original artist: Tyler.neysmith • File:Isobaric_process_plain.svg Source: https://upload.wikimedia.org/wikipedia/commons/d/d0/Isobaric_process_plain.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: IkamusumeFan • File:Isochoric_process_SVG.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/9d/Isochoric_process_SVG.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: IkamusumeFan • File:Isothermal_process.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Isothermal_process.svg License: CC0 Contributors: Own work Original artist: Netheril96 • File:JHLambert.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9b/JHLambert.jpg License: Public domain Contributors: ? Original artist: ? • File:Jacques_Alexandre_César_Charles.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/98/Jacques_Alexandre_C%C3% A9sar_Charles.jpg License: Public domain Contributors: This image is available from the United States Library of Congress's Prints and Photographs division under the digital ID ppmsca.02185. This tag does not indicate the copyright status of the attached work. A normal copyright tag is still required. See Commons:Licensing for more information.

Original artist: Unknown • File:James-clerk-maxwell3.jpg Source: https://upload.wikimedia.org/wikipedia/commons/6/6f/James-clerk-maxwell3.jpg License: Public domain Contributors: ? Original artist: ? • File:Joule's_Apparatus_(Harper's_Scan).png Source: https://upload.wikimedia.org/wikipedia/commons/c/c3/Joule%27s_Apparatus_%28Harper%27s_Scan%29.png License: Public domain Contributors: Harper’s New Monthly Magazine, No. 231, August, 1869. Original artist: Unknown • File:Linia_dilato.png Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/Linia_dilato.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Walber • File:Liquid_helium_superfluid_phase.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/ba/Liquid_helium_superfluid_phase.jpg License: Public domain Contributors: Liquid_helium_superfluid_phase.tif Original artist: Bmatulis • File:Maquina_vapor_Watt_ETSIIM.jpg Source: https://upload.wikimedia.org/wikipedia/commons/9/9e/Maquina_vapor_Watt_ETSIIM.jpg License: CC-BY-SA-3.0 Contributors: Enciclopedia Libre Original artist: Nicolás Pérez • File:Maxwell_Dist-Inverse_Speed.png Source: https://upload.wikimedia.org/wikipedia/en/d/d0/Maxwell_Dist-Inverse_Speed.png License: Cc-by-sa-3.0 Contributors: ? Original artist: ?

• File:P-v_diagram_of_a_simple_cycle.svg Source: https://upload.wikimedia.org/wikipedia/commons/4/4c/P-v_diagram_of_a_simple_cycle.svg License: CC0 Contributors: Own work Original artist: Olivier Cleynen
• File:PV_plot_adiab_sim.png Source: https://upload.wikimedia.org/wikipedia/commons/1/10/PV_plot_adiab_sim.png License: Public domain Contributors: Own work Original artist: Mikiemike
• File:PV_real1.PNG Source: https://upload.wikimedia.org/wikipedia/commons/8/8d/PV_real1.PNG License: CC-BY-SA-3.0 Contributors: Own archive Original artist: Pedro Servera († 2005)
• File:Parmenides.jpg Source: https://upload.wikimedia.org/wikipedia/commons/e/ed/Parmenides.jpg License: CC-BY-SA-3.0 Contributors: ? Original artist: ?
• File:PdV_work_cycle.gif Source: https://upload.wikimedia.org/wikipedia/commons/c/c6/PdV_work_cycle.gif License: CC BY-SA 3.0 Contributors: Own work Original artist: Guy vandegrift
• File:PlatformHolly.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/81/PlatformHolly.jpg License: Public domain Contributors: http://www.netl.doe.gov/technologies/oil-gas/Petroleum/projects/EP/ResChar/15127Venoco.htm (U.S. Department of Energy) Original artist: employee of the U.S. government; public domain
• File:Polytropic.gif Source: https://upload.wikimedia.org/wikipedia/commons/a/ad/Polytropic.gif License: CC BY-SA 3.0 Contributors: This graphic was created with matplotlib. Original artist: IkamusumeFan
• File:Portal-puzzle.svg Source: https://upload.wikimedia.org/wikipedia/en/f/fd/Portal-puzzle.svg License: Public domain Contributors: ? Original artist: ?
• File:Pressure_exerted_by_collisions.svg Source: https://upload.wikimedia.org/wikipedia/commons/9/94/Pressure_exerted_by_collisions.svg License: CC BY-SA 3.0 Contributors: Own work, see http://www.becarlson.com/ Original artist: Becarlson
• File:Pressure_force_area.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/ff/Pressure_force_area.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Klaus-Dieter Keller
• File:Question_book-new.svg Source: https://upload.wikimedia.org/wikipedia/en/9/99/Question_book-new.svg License: CC BY-SA 3.0 Contributors: Created from scratch in Adobe Illustrator. Based on Image:Question book.png created by User:Equazcion Original artist: Tkgd2007
• File:Rail_buckle.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b8/Rail_buckle.jpg License: Public domain Contributors: Transferred from en.wikipedia to Commons. Original artist: The original uploader was Trainwatcher at English Wikipedia
• File:Rankine_William_signature.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/58/Rankine_William_signature.jpg License: Public domain Contributors: Frontispiece of Miscellaneous Scientific Papers Original artist: William Rankine
• File:Real_Gas_Isotherms.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3e/Real_Gas_Isotherms.svg License: CC BY-SA 3.0 Contributors: Own work Original artist: Raoul NK
• File:Red_x.svg Source: https://upload.wikimedia.org/wikipedia/en/b/ba/Red_x.svg License: PD Contributors: ? Original artist: ?
• File:Robert_Boyle_0001.jpg Source: https://upload.wikimedia.org/wikipedia/commons/b/b3/Robert_Boyle_0001.jpg License: Public domain Contributors: http://www.bbk.ac.uk/boyle/Issue4.html Original artist: Johann Kerseboom
• File:SI_base_unit.svg Source: https://upload.wikimedia.org/wikipedia/commons/c/c8/SI_base_unit.svg License: CC BY-SA 3.0 Contributors: I (Dono (talk)) created this work entirely by myself. Based on http://www.newscientist.com/data/images/archive/2622/26221501.jpg Original artist: Dono (talk)
• File:Sadi_Carnot.jpeg Source: https://upload.wikimedia.org/wikipedia/commons/8/80/Sadi_Carnot.jpeg License: Public domain Contributors: http://www-history.mcs.st-and.ac.uk/history/PictDisplay/Carnot_Sadi.html Original artist: Louis-Léopold Boilly
• File:Savery-engine.jpg Source: https://upload.wikimedia.org/wikipedia/commons/c/cc/Savery-engine.jpg License: Public domain Contributors: Image copied from http://www.humanthermodynamics.com/HT-history.html Original artist: Institute of Human Thermodynamics and IoHT Publishing Ltd.
• File:Schematic_of_compressor.png Source: https://upload.wikimedia.org/wikipedia/commons/3/38/Schematic_of_compressor.png License: CC BY-SA 3.0 Contributors: en:File:Schematic of throttling and compressor 01.jpg Original artist: en:User:Adwaele
• File:Schematic_of_throttling.png Source: https://upload.wikimedia.org/wikipedia/commons/8/8f/Schematic_of_throttling.png License: CC BY-SA 3.0 Contributors: en:File:Schematic of throttling and compressor 01.jpg Original artist: en:User:Adwaele
• File:Science.jpg Source: https://upload.wikimedia.org/wikipedia/commons/5/54/Science.jpg License: Public domain Contributors: ? Original artist: ?
• File:Speakerlink-new.svg Source: https://upload.wikimedia.org/wikipedia/commons/3/3b/Speakerlink-new.svg License: CC0 Contributors: Own work Original artist: Kelvinsong
• File:SpongeDiver.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/81/SpongeDiver.jpg License: Public domain Contributors: Own work Original artist: Bryan Shrode
• File:Stirling_Cycle.png Source: https://upload.wikimedia.org/wikipedia/commons/d/dc/Stirling_Cycle.png License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons. Original artist: Zephyris at English Wikipedia
• File:Stirling_Cycle.svg Source: https://upload.wikimedia.org/wikipedia/commons/2/25/Stirling_Cycle.svg License: Public domain Contributors: Own work Original artist: Nickez


• File:Stirling_Cycle_color.png Source: https://upload.wikimedia.org/wikipedia/commons/a/af/Stirling_Cycle_color.png License: Public domain Contributors: I created this modification of the original image (File:Stirling Cycle.svg) to clarify the temperature change that occurs during the Stirling cycle Original artist: Kmote at English Wikipedia
• File:Stylised_Lithium_Atom.svg Source: https://upload.wikimedia.org/wikipedia/commons/e/e1/Stylised_Lithium_Atom.svg License: CC-BY-SA-3.0 Contributors: based on Image:Stylised Lithium Atom.png by Halfdan. Original artist: SVG by Indolences. Recoloring and ironing out some glitches done by Rainer Klute.
• File:Symbol_book_class2.svg Source: https://upload.wikimedia.org/wikipedia/commons/8/89/Symbol_book_class2.svg License: CC BY-SA 2.5 Contributors: Made by Lokal_Profil by combining: Original artist: Lokal_Profil
• File:Symbol_list_class.svg Source: https://upload.wikimedia.org/wikipedia/en/d/db/Symbol_list_class.svg License: Public domain Contributors: ? Original artist: ?
• File:System_boundary.svg Source: https://upload.wikimedia.org/wikipedia/commons/b/b6/System_boundary.svg License: Public domain Contributors: en:Image:System-boundary.jpg Original artist: en:User:Wavesmikey, traced by User:Stannered
• File:System_boundary2.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/63/System_boundary2.svg License: CC BY-SA 4.0 Contributors: Own work Original artist: Krauss
• File:Temperature-entropy_chart_for_steam,_US_units.svg Source: https://upload.wikimedia.org/wikipedia/commons/6/63/Temperature-entropy_chart_for_steam%2C_US_units.svg License: CC BY-SA 3.0 Contributors: Own work. Data retrieved from: E.W. Lemmon, M.O. McLinden and D.G. Friend, “Thermophysical Properties of Fluid Systems” in NIST Chemistry WebBook, NIST Standard Reference Database Number 69, Eds. P.J. Linstrom and W.G. Mallard, National Institute of Standards and Technology, Gaithersburg MD, 20899, http://webbook.nist.gov (retrieved November 2, 2010). Original artist: Emok
• File:Thermally_Agitated_Molecule.gif Source: https://upload.wikimedia.org/wikipedia/commons/2/23/Thermally_Agitated_Molecule.gif License: CC-BY-SA-3.0 Contributors: http://en.wikipedia.org/wiki/Image:Thermally_Agitated_Molecule.gif Original artist: en:User:Greg L
• File:Thermodynamics.png Source: https://upload.wikimedia.org/wikipedia/commons/3/3d/Thermodynamics.png License: CC BY-SA 3.0 Contributors: Own work Original artist: Miketwardos
• File:Translational_motion.gif Source: https://upload.wikimedia.org/wikipedia/commons/6/6d/Translational_motion.gif License: CC-BY-SA-3.0 Contributors: English Wikipedia Original artist: A.Greg, en:User:Greg L
• File:Triple_expansion_engine_cropped.png Source: https://upload.wikimedia.org/wikipedia/commons/3/33/Triple_expansion_engine_cropped.png License: CC BY 2.5 Contributors: crop of en:Image:Triple_expansion_engine_animation.gif Original artist: Emoscopes
• File:Ts_diagram_of_N2_02.jpg Source: https://upload.wikimedia.org/wikipedia/en/0/03/Ts_diagram_of_N2_02.jpg License: CC-BY-SA-3.0 Contributors: made with slitewrite Original artist: Adwaele
• File:Wiens_law.svg Source: https://upload.wikimedia.org/wikipedia/commons/a/a2/Wiens_law.svg License: CC-BY-SA-3.0 Contributors: Own work based on JPG version Curva Planck TT.jpg Original artist: 4C
• File:Wiki_letter_w_cropped.svg Source: https://upload.wikimedia.org/wikipedia/commons/1/1c/Wiki_letter_w_cropped.svg License: CC-BY-SA-3.0 Contributors: This file was derived from Wiki letter w.svg: Original artist: Derivative work by Thumperward
• File:Wikiquote-logo.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/fa/Wikiquote-logo.svg License: Public domain Contributors: ? Original artist: ?
• File:Wiktionary-logo-en.svg Source: https://upload.wikimedia.org/wikipedia/commons/f/f8/Wiktionary-logo-en.svg License: Public domain Contributors: Vector version of Image:Wiktionary-logo-en.png. Original artist: Vectorized by Fvasconcellos (talk · contribs), based on original logo tossed together by Brion Vibber
• File:Willard_Gibbs.jpg Source: https://upload.wikimedia.org/wikipedia/commons/8/8b/Willard_Gibbs.jpg License: Public domain Contributors: ? Original artist: ?
• File:William_Thomson_1st_Baron_Kelvin.jpg Source: https://upload.wikimedia.org/wikipedia/commons/d/de/William_Thomson_1st_Baron_Kelvin.jpg License: Public domain Contributors: From http://ihm.nlm.nih.gov/images/B16057 (via en.wikipedia as Image:Lord+Kelvin.jpg; all following user names refer to en.wikipedia) Original artist: Unknown
• File:Zero-point_energy_v.s._motion.jpg Source: https://upload.wikimedia.org/wikipedia/commons/7/79/Zero-point_energy_v.s._motion.jpg License: CC-BY-SA-3.0 Contributors: Transferred from en.wikipedia to Commons by Undead_warrior using CommonsHelper. Original artist: Greg L at English Wikipedia

12.3 Content license
• Creative Commons Attribution-Share Alike 3.0