This book is designed for a first course in nonlinear optimization. It begins with classical optimization notions from calculus and proceeds smoothly to a study of convex functions, followed by material on basic numerical methods, least squares, Karush-Kuhn-Tucker theory, penalty functions, and Lagrange multipliers. The book has been classroom-tested; the approach is rigorous throughout, and geometric intuition is developed at every stage. The numerical methods are up to date, and the presentation emphasizes the mathematical ideas behind computer codes.
The book is aimed at students who have a working knowledge of linear algebra and partial differentiation but no previous exposure to optimization. Mathematics instructors will be comfortable with the mathematical approach, which de-emphasizes recipes and stresses understanding of the underlying concepts. The many exercises are chosen to highlight the fundamental ideas.