Discontinuous control problems for non-convex dynamics and near viability for singularly perturbed control systems

https://doi.org/10.1016/j.na.2010.06.050

Abstract

The aim of this paper is to study two classes of discontinuous control problems without any convexity assumption on the dynamics. In the first part we characterize the value function for the Mayer problem and the supremum cost problem using viscosity tools and the notion of ε-viability (near viability). These value functions are given with respect to discontinuous cost functionals. In the second part we obtain results describing the ε-viability (near viability) of singularly perturbed control systems.

Introduction

We investigate two classes of control problems for which no convexity assumption is required on the dynamics. First, we provide viscosity characterizations of the value function in both the Mayer and the supremum cost settings whenever the cost functional is discontinuous. Second, we establish ε-viability (near viability) properties for a singularly perturbed control system.

We begin by characterizing ε-viability (with or without constraints) via an associated L∞-control problem. A closed set of constraints K is said to be viable (respectively, ε-viable) with respect to a dynamical control system if, starting from K, one can find suitable controls keeping the trajectory in K (respectively, in an arbitrarily small neighborhood of the set of constraints). Viability properties have been extensively studied, starting from the pioneering work of Nagumo [1]. The methods used to describe this property rely either on the Bouligand–Severi contingent cone [2], [3] or on viscosity solutions [4], [5], [6], [7]. The latter approach consists in expressing the viability property by means of an optimal control problem. In the literature, the value function is given as a running cost involving the distance to the set of constraints (see [8], [9]). This method works if the dynamics is convex: in that case, one can find an optimal control and the associated solution is viable. Since we deal with non-convex dynamics, an optimal control for such a problem might not exist, and the running cost then gives little information on the actual distance to the set of constraints at a specific time horizon. Motivated by these observations, we propose a value function given by a supremum cost. This approach allows us to avoid relaxation techniques and appears to be the natural way to characterize ε-viability. Moreover, this formulation better illustrates what can be expected from the ε-viability property at any time horizon. In general, L∞-control problems lead to strongly non-linear Hamilton–Jacobi equations for which classical solutions may not, and generally do not, exist [10], [11], [12]. Using analytic tools from the theory of viscosity solutions for Hamilton–Jacobi variational inequalities, we give a geometric criterion describing ε-viability domains. As expected, this condition involves the classical first-order normal cone (see [2], [3] for a different proof for convex dynamics).
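Loosely speaking, the supremum-cost value function alluded to above can be sketched as follows (the notation below is ours and is given for illustration only; the paper's precise definitions may differ):

```latex
V(t_0,x_0) \;=\; \inf_{u\in\mathcal{U}}\ \sup_{t\in[t_0,T]} d_K\big(x_t^{t_0,x_0,u}\big),
\qquad d_K(x) \;:=\; \inf_{y\in K} |x-y| .
```

With such a choice, V(t_0,x_0) = 0 for x_0 ∈ K expresses exactly that, for every ε > 0, some admissible control keeps the trajectory within the ε-neighborhood of K, i.e. that K is ε-viable, even when no minimizing (optimal) control exists.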

As an application of this criterion, we characterize the value functions of Mayer and supremum-type problems in the discontinuous framework. In the convex case, this problem has been solved using viability theory, for instance in [10], [12], [13], [14], [15]. The main difficulties in our setting are the lack of convexity of the dynamics and the discontinuity of the costs. We overcome them by taking an ε-viability approach and by modifying the definition of the value functions. We consider several cases in which the value function is given with respect to either bounded or semicontinuous cost functions. Illustrative examples clarify how the value function should be defined for a lower semicontinuous cost. The use of ε-viability methods allows us to avoid relaxation techniques of the control in the dynamics.

In the second part of the paper, we consider a singularly perturbed control problem. Viability for singularly perturbed control systems is a rather recent topic; to the best of our knowledge, it has been studied only in [16], where the analysis relies on the averaging method. Using the characterization of ε-viable domains via L∞-control problems, we considerably simplify the proof of the main result in [16] and generalize it to the case with constraints.

This paper is organized as follows. Section 2 characterizes the value functions of non-convex control problems with discontinuous cost. We begin with a criterion for ε-viability expressed through first-order normal cones (Section 2.1). Using this result, we characterize the value function of a Mayer problem (Section 2.2) and of a supremum-type problem (Section 2.3) in the discontinuous framework. Section 2.4 gives a criterion for ε-viability under additional constraints. Section 3 deals with viability properties of a singularly perturbed control problem, both in the classical setting and in the case with additional constraints.

Section snippets

Control problems with discontinuous cost

We are dealing with the following control system:

    dx_t^{t_0,x_0,u} = f(t, x_t^{t_0,x_0,u}, u_t) dt,   t_0 ≤ t ≤ T,
    x_{t_0}^{t_0,x_0,u} = x_0 ∈ R^N,

where T > 0 is a finite time horizon, t_0 ∈ [0, T], and the control u takes its values in a compact metric space U. We recall that a control (u_t)_{t_0 ≤ t ≤ T} is said to be admissible on [t_0, T] if it is Lebesgue-measurable on [t_0, T], and we let 𝒰 denote the family of all admissible controls on [0, T]. We assume that the coefficient function f : R_+ × R^N × U → R^N satisfies: f is continuous on R_+ × R^N × U, and |f(t, x, u)| ≤ c(1 + …
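As a purely illustrative numerical sketch (not the paper's construction), one can discretize such a control system with an Euler scheme and evaluate a supremum cost measuring the largest distance of the trajectory to a constraint set K. The dynamics f, the control set, and the set K below are hypothetical toy choices.

```python
import numpy as np

def simulate(f, x0, controls, t0=0.0, T=1.0, steps=100):
    """Euler discretization of dx = f(t, x, u) dt on [t0, T]
    for a piecewise-constant control sequence."""
    dt = (T - t0) / steps
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for k in range(steps):
        t = t0 + k * dt
        u = controls[k % len(controls)]  # cycle through the given controls
        x = x + dt * f(t, x, u)
        traj.append(x.copy())
    return traj

def sup_cost(traj, dist_K):
    """Supremum-cost functional: the largest distance to K along the path."""
    return max(dist_K(x) for x in traj)

# Hypothetical data: dynamics f(t, x, u) = u on R^1 with controls in {-1, +1},
# and constraint set K = [-1, 1], so dist_K(x) = max(0, |x| - 1).
f = lambda t, x, u: np.array([u])
dist_K = lambda x: max(0.0, abs(x[0]) - 1.0)

# Steering inward keeps the supremum cost at zero (the trajectory stays in K);
# a constant outward control makes it grow with the excursion outside K.
traj_good = simulate(f, [0.9], controls=[-1])
traj_bad = simulate(f, [0.9], controls=[+1])
print(sup_cost(traj_good, dist_K))  # 0.0 (trajectory never leaves K)
print(sup_cost(traj_bad, dist_K))   # roughly 0.9
```

The example only illustrates why a supremum cost records the worst constraint violation along the whole trajectory, in contrast to a running (integral) cost.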

Singularly perturbed control systems

In this section we simplify the proof of the main result in [16] and generalize it to the case with constraints. Our method is based on the previous results and on the averaging method. We consider the following dynamics:

    dx_s^{t,x,y,u} = f(x_s^{t,x,y,u}, y_s^{t,x,y,u}, u_s) ds,
    ε dy_s^{t,x,y,u} = g(x_s^{t,x,y,u}, y_s^{t,x,y,u}, u_s) ds,

for all s ≥ t, where (t, x, y) ∈ [0, ∞) × R^M × R^N and ε is a small real parameter. The evolutions of the two state variables x and y of the system are of different scales. We call x the "slow" variable and y the "fast" variable.
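The two-scale structure can be seen in a small numerical sketch (again hypothetical, with the control suppressed for brevity): for small ε, the fast variable y relaxes almost instantly toward a slow manifold, after which the slow variable x evolves on the original time scale. The specific f and g below are toy choices, not taken from the paper.

```python
def simulate_slow_fast(f, g, x0, y0, eps, T=1.0, steps=2000):
    """Explicit Euler for the singularly perturbed system
        dx/ds = f(x, y),   eps * dy/ds = g(x, y),
    with the control suppressed. The step size must resolve the
    fast scale, i.e. dt/eps must stay small for stability."""
    dt = T / steps
    x, y = float(x0), float(y0)
    for _ in range(steps):
        x += dt * f(x, y)
        y += (dt / eps) * g(x, y)
    return x, y

# Toy dynamics: the slow variable is driven by y, while the fast
# variable relaxes quickly toward the slow manifold y = x.
f = lambda x, y: y        # dx/ds = y
g = lambda x, y: x - y    # eps dy/ds = x - y

x_T, y_T = simulate_slow_fast(f, g, x0=1.0, y0=5.0, eps=0.01)
print(x_T, y_T)  # y_T tracks x_T closely despite y0 starting far away
```

After a transient of duration O(ε), y stays within O(ε) of x, which is the separation of scales exploited by averaging methods for such systems.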

References (26)

  • M. Bardi et al., Optimal control and viscosity solutions of Hamilton–Jacobi–Bellman equations
  • G. Barles, Solutions de viscosité des équations de Hamilton–Jacobi (Viscosity solutions of Hamilton–Jacobi equations)
  • L.C. Evans, Partial differential equations
