In this paper, we propose a hierarchical mission planner in which the state of the world and of the mission are abstracted into corresponding states of a Markov Decision Process (MDP). Transitions in the MDP represent abstract motion actions that are planned by a lower-level probabilistic planner. The cost structure of the MDP is multi-dimensional: each state-action pair is annotated with a vector of metrics such as time and resource requirements. A mission specification is divided into three parts: a temporal logic formula defined over state propositions, the choice of a primary cost, and constraints on the remaining secondary costs. The planning problem is formulated as finding the optimal policy of a Constrained Markov Decision Process (CMDP) under this mission specification. The resulting planning system is tested on a scenario in which an agent is tasked with a complex mission in a hostile urban environment.
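For concreteness, one standard way to write such a constrained optimization is sketched below; the notation is illustrative and does not reproduce the exact formulation used later in the paper:

\[
\min_{\pi} \; \mathbb{E}^{\pi}\!\left[\sum_{t} c_0(s_t, a_t)\right]
\quad \text{s.t.} \quad
\mathbb{E}^{\pi}\!\left[\sum_{t} c_i(s_t, a_t)\right] \le d_i, \quad i = 1, \dots, k,
\]

where $c_0$ is the chosen primary cost, $c_1, \dots, c_k$ are the secondary costs annotating each state-action pair $(s_t, a_t)$, the $d_i$ are the bounds imposed by the mission specification, and the minimization is restricted to policies $\pi$ whose induced executions satisfy the temporal logic formula.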