ISSN: 1078-0947 | eISSN: 1553-5231

## Discrete & Continuous Dynamical Systems - A

September 2015, Volume 35, Issue 9

Special issue on optimal control and related fields


2015, 35(9): i-iv
doi: 10.3934/dcds.2015.35.9i

**Abstract:**

This special volume gathers a number of new contributions addressing various topics related to the field of optimal control theory and sensitivity analysis. The field has a rich and varied mathematical theory, with a long tradition and a vibrant body of applications. It has attracted growing interest over recent decades, thanks to new ideas and techniques and to a variety of new applications.


2015, 35(9): 3879-3900
doi: 10.3934/dcds.2015.35.3879

**Abstract:**

We discuss the system of Fokker-Planck and Hamilton-Jacobi-Bellman equations arising from the finite horizon control of McKean-Vlasov dynamics. We give examples of existence and uniqueness results. Finally, we propose some simple models for the motion of pedestrians and report on numerical simulations in which we compare mean field games and mean field type control.

2015, 35(9): 3901-3931
doi: 10.3934/dcds.2015.35.3901

**Abstract:**

A basic question for zero-sum repeated games consists in determining whether the mean payoff per time unit is independent of the initial state. In the special case of "zero-player" games, i.e., of Markov chains equipped with additive functionals, the answer is provided by the mean ergodic theorem. We generalize this result to repeated games. We show that the mean payoff is independent of the initial state for all state-dependent perturbations of the rewards if and only if an ergodicity condition holds. The latter is characterized by the uniqueness modulo constants of nonlinear harmonic functions (fixed points of the recession function associated with the Shapley operator) or, in the special case of stochastic games with finite action spaces and perfect information, by a reachability condition involving conjugate subsets of states in directed hypergraphs. We show that the ergodicity condition for games depends only on the support of the transition probability, and that it can be checked in polynomial time when the number of states is fixed.
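The "zero-player" special case mentioned in the abstract is easy to check numerically: for an ergodic Markov chain with additive rewards, the mean payoff per time unit is the same from every initial state. The chain, rewards, and horizon below are illustrative choices, not taken from the paper; a minimal sketch in Python:

```python
import numpy as np

# Illustration of the "zero-player" case: for an ergodic Markov chain with
# additive rewards, the mean payoff per time unit is independent of the
# initial state. Chain, rewards, and horizon are illustrative choices.

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])     # transition matrix (ergodic)
r = np.array([1.0, 3.0])       # per-step rewards

def mean_payoff(P, r, start, n=20000):
    """Expected average reward over n steps from a given initial state."""
    v = np.zeros(len(r))
    for _ in range(n):
        v = r + P @ v          # additive-reward (Bellman) recursion
    return v[start] / n

p0 = mean_payoff(P, r, start=0)
p1 = mean_payoff(P, r, start=1)
print(abs(p0 - p1) < 1e-3)     # both starts give (nearly) the same payoff
```

Both averages converge to the stationary expectation of the rewards; the residual difference decays like the bias term divided by the horizon.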

2015, 35(9): 3933-3964
doi: 10.3934/dcds.2015.35.3933

**Abstract:**

This work deals with numerical approximations of unbounded and discontinuous value functions associated with some stochastic control problems. We derive error estimates for monotone schemes based on a Semi-Lagrangian method (or, more generally, in the form of a Markov chain approximation). One motivation for this study is the approximation of chance-constrained reachability sets, which are characterized as level sets of a discontinuous value function associated with an adequate stochastic control problem. A precise analysis of the level-set approach is carried out, and some numerical simulations are given to illustrate the approach.

2015, 35(9): 3965-3988
doi: 10.3934/dcds.2015.35.3965

**Abstract:**

We consider the short time behaviour of stochastic systems affected by a stochastic volatility evolving at a faster time scale. We study the asymptotics of a logarithmic functional of the process by methods of the theory of homogenization and singular perturbations for fully nonlinear PDEs. We point out three regimes depending on how fast the volatility oscillates relative to the horizon length. We prove a large deviation principle for each regime and apply it to the asymptotics of option prices near maturity.

2015, 35(9): 3989-4017
doi: 10.3934/dcds.2015.35.3989

**Abstract:**

We consider state constrained optimal control problems in which the cost to minimize comprises an $L^\infty$ functional, i.e. the maximum of a running cost along the trajectories. In the absence of state constraints, a new approach was suggested in a recent paper [9]. The main purpose of the present paper is to extend this approach and the related results to state constrained $L^\infty$ optimal control problems. More precisely, using the $(L^\infty, L^1)$-duality, the reference optimal control problem can be seen as a *static differential game*, in which an extra variable is introduced and plays the role of an opponent player who wants to *maximize* the cost. Under appropriate assumptions, and employing suitable Filippov-type results, this static game turns out to be equivalent to the corresponding *dynamic differential game*, whose (upper) value function is the unique viscosity solution to a *constrained boundary value problem* involving a Hamilton-Jacobi equation with a *continuous* Hamiltonian.

2015, 35(9): 4019-4039
doi: 10.3934/dcds.2015.35.4019

**Abstract:**

We present a novel method to compute Lyapunov functions for continuous-time systems with multiple local attractors. In the proposed method one first computes an outer approximation of the local attractors using a graph-theoretic approach. Then a candidate Lyapunov function is computed using a Massera-like construction adapted to multiple local attractors. In the final step this candidate Lyapunov function is interpolated over the simplices of a simplicial complex and, by checking certain inequalities at the vertices of the complex, we can identify the region in which the Lyapunov function is decreasing along system trajectories. The resulting Lyapunov function gives information on the qualitative behavior of the dynamics, including lower bounds on the basins of attraction of the individual local attractors. We develop the theory in detail and present numerical examples demonstrating the applicability of our method.
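A much-simplified sketch of the Massera-like step above, for a single attractor rather than the paper's multiple-attractor setting: define a candidate $V(x)$ as the integral of $|\varphi(t;x)|$ along numerically computed trajectories over a finite window, then check decrease of $V$ along the flow at sample points. The scalar ODE and all parameters are hypothetical.

```python
# Much-simplified sketch of a Massera-like Lyapunov candidate for a SINGLE
# attractor (the paper handles multiple attractors): V(x) integrates |phi(t;x)|
# over a finite window along numerically computed trajectories, and decrease
# of V along the flow is checked at sample points. ODE and parameters are
# hypothetical.

def f(x):
    return -x + 0.1 * x**3      # scalar field, attractor at x* = 0

def flow(x, t, dt=1e-3):
    """Explicit Euler approximation of the flow phi(t; x)."""
    for _ in range(int(t / dt)):
        x += dt * f(x)
    return x

def V(x, T=5.0, dt=1e-3):
    """Massera-like candidate: integral of |phi(t; x)| over [0, T]."""
    total = 0.0
    for _ in range(int(T / dt)):
        total += dt * abs(x)
        x += dt * f(x)
    return total

samples = [-1.0, -0.5, 0.5, 1.0]          # points inside the basin
decreasing = all(V(flow(x, 0.5)) < V(x) for x in samples)
print(decreasing)
```

Since $|\varphi(t;x)|$ is decreasing along trajectories in the basin, shifting the integration window forward can only shrink the integral, which is exactly the decrease property being verified.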

2015, 35(9): 4041-4070
doi: 10.3934/dcds.2015.35.4041

**Abstract:**

We present an abstract convergence result for the fixed point approximation of stationary Hamilton--Jacobi equations. The basic assumptions on the discrete operator are invariance with respect to the addition of constants, $\epsilon$-monotonicity and consistency. The result can be applied to various high-order approximation schemes which are illustrated in the paper. Several applications to Hamilton--Jacobi equations and numerical tests are presented.
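The fixed-point iteration underlying such schemes can be sketched for a one-dimensional discounted problem: the semi-Lagrangian operator below is monotone and a contraction, and value iteration converges to its fixed point. The dynamics, running cost, grid, and discount are all illustrative assumptions, and this sketch is only first order (the paper's interest is in high-order schemes).

```python
import numpy as np

# Sketch of the fixed-point (value iteration) approximation for a discounted
# stationary HJB equation in semi-Lagrangian form:
#   V(x) = min_u { dt*l(x) + (1 - lam*dt) * V(x + dt*f(x,u)) },
# with f(x,u) = u, |u| <= 1, running cost l(x) = x^2 on [-1,1].
# Problem data are illustrative; the scheme here is only first order.

lam, dt = 1.0, 0.05
xs = np.linspace(-1.0, 1.0, 201)
l = xs ** 2

def T(V):
    """One application of the semi-Lagrangian fixed-point operator."""
    W = np.empty_like(V)
    for i, x in enumerate(xs):
        best = np.inf
        for u in (-1.0, 0.0, 1.0):
            y = np.clip(x + dt * u, -1.0, 1.0)
            best = min(best, dt * l[i] + (1 - lam * dt) * np.interp(y, xs, V))
        W[i] = best
    return W

V = np.zeros_like(xs)
for _ in range(500):
    Vn = T(V)
    if np.max(np.abs(Vn - V)) < 1e-10:   # fixed point reached
        V = Vn
        break
    V = Vn
print(abs(xs[np.argmin(V)]))   # minimum of V sits at the cheap state x = 0
```

The contraction factor per sweep is $1-\lambda\,dt = 0.95$, so the iteration reaches the stated tolerance in a few hundred sweeps.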

2015, 35(9): 4071-4094
doi: 10.3934/dcds.2015.35.4071

**Abstract:**

We study the problem of consensus emergence in multi-agent systems via external feedback controllers. We consider a set of agents interacting with dynamics given by a Cucker-Smale type of model, and study its consensus stabilization by means of centralized and decentralized control configurations. We present a characterization of consensus emergence for systems with different feedback structures, such as leader-based configurations, perturbed information feedback, and feedback computed upon spatially confined information. We characterize consensus emergence for this latter design as a parameter-dependent transition regime between self-regulation and centralized feedback stabilization. Numerical experiments illustrate the different features of the proposed designs.
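The self-regulation (uncontrolled) regime of a Cucker-Smale-type model is straightforward to simulate; the sketch below uses an illustrative communication weight and checks that the velocity spread contracts, i.e. consensus emerges without external feedback. All parameters are assumptions for illustration.

```python
import numpy as np

# Simulation of an (uncontrolled) Cucker-Smale-type alignment model: each
# agent adjusts its velocity toward the others with a communication weight
# decaying in distance. With this weight the velocity spread contracts,
# i.e. consensus emerges by self-regulation alone.

rng = np.random.default_rng(0)
N, dt, steps = 20, 0.05, 400
x = rng.normal(size=(N, 2))              # positions
v = rng.normal(size=(N, 2))              # velocities

def comm(r2, K=1.0, sigma=1.0, beta=0.3):
    """Cucker-Smale communication rate as a function of squared distance."""
    return K / (sigma**2 + r2) ** beta

spread0 = np.var(v, axis=0).sum()
for _ in range(steps):
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)      # |x_i - x_j|^2
    a = comm(d2)
    dv = (a[:, :, None] * (v[None, :, :] - v[:, None, :])).mean(axis=1)
    x, v = x + dt * v, v + dt * dv
spread = np.var(v, axis=0).sum()
print(spread < 0.01 * spread0)           # velocities have (nearly) aligned
```

Because the weight matrix is symmetric, the mean velocity is conserved while the variance decays, which is the flocking behavior the controlled designs build upon.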

2015, 35(9): 4095-4114
doi: 10.3934/dcds.2015.35.4095

**Abstract:**

The objective of this article is to analyze the integrability properties of extremal solutions of the Pontryagin Maximum Principle in the time-minimal control of a linear spin system with Ising coupling, in relation to conjugate and cut locus computations. Restricting to the case of three spins, the problem is equivalent to analyzing a family of almost-Riemannian metrics on the sphere $S^{2}$ with Grushin equatorial singularity. The problem can be lifted to an SR-invariant problem on $SO(3)$; this leads to a complete understanding of the geometry of the problem and to an explicit parametrization of the extremals using an appropriate chart as well as elliptic functions. This approach is compared with the direct analysis of the Liouville metrics on the sphere, where the parametrization of the extremals is obtained by computing a Liouville normal form. Finally, an algebraic approach is presented in the framework of the application of differential Galois theory to integrability.

2015, 35(9): 4115-4147
doi: 10.3934/dcds.2015.35.4115

**Abstract:**

A 3D almost-Riemannian manifold is a generalized Riemannian manifold defined locally by 3 vector fields that play the role of an orthonormal frame but may become collinear on a set $\mathcal{Z}$ called the singular set. Under the Hörmander condition, a 3D almost-Riemannian structure still carries a metric space structure whose topology is compatible with the original topology of the manifold. Almost-Riemannian manifolds have been studied in depth in dimension 2.

In this paper we begin the study of the 3D case, which turns out to be richer than the 2D case due to the presence of abnormal extremals, which define a field of directions on the singular set. We study the types of singularities of the metric that can appear generically, construct local normal forms, and study abnormal extremals. We then study the nilpotent approximation and the structure of the corresponding small spheres.

We finally give some preliminary results about heat diffusion on such manifolds.

2015, 35(9): 4149-4171
doi: 10.3934/dcds.2015.35.4149

**Abstract:**

The paper studies a class of conservation law models for traffic flow on a family of roads, near a junction. A Riemann Solver is constructed, where the incoming and outgoing fluxes depend Hölder continuously on the traffic density and on the drivers' turning preferences. However, various examples show that, if junction conditions are assigned in terms of Riemann Solvers, then the Cauchy problem on a network of roads can be ill-posed, even for initial data having small total variation.

2015, 35(9): 4173-4192
doi: 10.3934/dcds.2015.35.4173

**Abstract:**

In [14], Guéant, Lasry and Lions considered the model problem "What time does meeting start?" as a prototype for a general class of optimization problems with a continuum of players, called Mean Field Games problems. In this paper we consider a similar model, but with the dynamics of the agents defined on a network. We discuss appropriate transition conditions at the vertices that yield a well-posed problem, and we present some numerical results.

2015, 35(9): 4193-4223
doi: 10.3934/dcds.2015.35.4193

**Abstract:**

In recent years, much effort in designing numerical methods for the simulation and optimization of mechanical systems has been put into structure-preserving schemes. One particular class are variational integrators, which are momentum preserving and symplectic. In this article, we develop two high order variational integrators which differ in the dimension of the underlying space of approximation, and we investigate their application to finite-dimensional optimal control problems for mechanical systems. The convergence of the state and control variables of the approximated problem is shown. Furthermore, by analyzing the adjoint systems of the optimal control problem and its discretized counterpart, we prove that, for these particular integrators, dualization and discretization commute.

2015, 35(9): 4225-4239
doi: 10.3934/dcds.2015.35.4225

**Abstract:**

We investigate the properties of the set of singularities of semiconcave solutions of Hamilton-Jacobi equations of the form \begin{equation}\label{abstract:EQ} u_t(t,x)+H(\nabla u(t,x))=0, \qquad\text{a.e. }(t,x)\in (0,+\infty)\times\Omega\subset\mathbb{R}^{n+1}\,. \end{equation} It is well known that the singularities of such solutions propagate locally along generalized characteristics. Special generalized characteristics, satisfying an energy condition, can be constructed, under some assumptions on the structure of the Hamiltonian $H$. In this paper, we provide estimates of the dissipative behavior of the energy along such curves. As an application, we prove that the singularities of any viscosity solution of (1) cannot vanish in a finite time.

2015, 35(9): 4241-4268
doi: 10.3934/dcds.2015.35.4241

**Abstract:**

In this paper we present a model for opinion dynamics on the $d$-dimensional sphere based on classical consensus algorithms. The choice of the model is motivated by the analysis of the comprehensive literature on the subject, both from the mathematical and the sociological points of view. The resulting dynamics is highly nonlinear and therefore presents a rich structure. Equilibria and asymptotic behavior are then analysed and sufficient conditions for consensus are established. Finally we address global stabilization and controllability.

2015, 35(9): 4269-4292
doi: 10.3934/dcds.2015.35.4269

**Abstract:**

In this paper we study a fully discrete Semi-Lagrangian approximation of a second order Mean Field Game system, which can be degenerate. We prove that the resulting scheme is well posed and, if the state dimension is equal to one, we prove a convergence result. Some numerical simulations are provided, showing the convergence of the approximation and also the difference between the numerical results in the degenerate and non-degenerate cases.

2015, 35(9): 4293-4322
doi: 10.3934/dcds.2015.35.4293

**Abstract:**

We give sufficient conditions to reach a target for a suitable discretization of a control-affine nonlinear dynamics. Such conditions involve higher order Lie brackets of the vector fields driving the state, and so the discretization method needs to be of suitably high order as well. As a result, the discrete minimal time function is bounded by a fractional power of the distance of the initial point to the target. This allows us to use methods based on Hamilton-Jacobi theory to prove the convergence of the solution of a fully discrete scheme to the (true) minimum time function, together with error estimates. Finally, we design an approximate suboptimal discrete feedback and provide an error estimate for the time to reach the target through the discrete dynamics generated by this feedback. Our results make use of ideas appearing for the first time in [3] and now extensively described in [12]. Numerical examples are presented.

2015, 35(9): 4323-4343
doi: 10.3934/dcds.2015.35.4323

**Abstract:**

We study an optimal control problem with a Volterra-type integral equation, considered on a nonfixed time interval, subject to endpoint constraints of equality and inequality type. We obtain first-order necessary optimality conditions for an extended weak minimum, a notion which naturally generalizes the notion of weak minimum to account for variations of the time. The conditions obtained generalize the Euler-Lagrange equation and transversality conditions for the Lagrange problem in the classical calculus of variations with ordinary differential equations.

2015, 35(9): 4345-4366
doi: 10.3934/dcds.2015.35.4345

**Abstract:**

If $f_1,f_2$ are smooth vector fields on an open subset of a Euclidean space and $[f_1,f_2]$ is their Lie bracket, the asymptotic formula \begin{equation} \Psi_{[f_1,f_2]}(t_1,t_2)(x) - x = t_1t_2 [f_1,f_2](x) + o(t_1t_2), \tag{1} \end{equation} where we have set $\Psi_{[f_1,f_2]}(t_1,t_2)(x) \stackrel{\mathrm{def}}{=} \exp(-t_2 f_2)\circ \exp(-t_1f_1) \circ \exp(t_2f_2) \circ \exp(t_1f_1)(x)$, is valid for all $t_1,t_2$ small enough. In fact, the exact integral formula \begin{equation} \Psi_{[f_1,f_2]}(t_1,t_2)(x) - x = \int_0^{t_1}\int_0^{t_2}[f_1,f_2]^{(s_2,s_1)} (\Psi(t_1,s_2)(x))\,ds_1\,ds_2, \tag{2} \end{equation} where $[f_1,f_2]^{(s_2,s_1)}(y) \stackrel{\mathrm{def}}{=} D (\exp(s_1f_1) \circ \exp(s_2f_2))^{-1}(y) \cdot [f_1,f_2](\exp (s_1f_1) \circ \exp(s_2f_2)(y))$, has also been proven. Of course (2) can be regarded as an improvement of (1). In this paper we show that an integral representation like (2) holds true for any iterated Lie bracket made of elements of a family $\{f_1,\dots,f_m\}$ of vector fields. In perspective, these integral representations might lie at the basis of extensions of asymptotic formulas involving non-smooth vector fields.
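The commutator asymptotics (1) can be checked numerically on a concrete pair of fields. For the planar fields $f_1=(1,0)$ and $f_2=(0,x)$ (an illustrative choice with closed-form flows and $[f_1,f_2]=(0,1)$), the remainder $o(t_1t_2)$ actually vanishes, so (1) holds exactly:

```python
import numpy as np

# Numerical check of formula (1) for the planar fields f1 = (1, 0) and
# f2 = (0, x): their flows are available in closed form and their Lie
# bracket is [f1, f2] = (0, 1). For this pair the remainder vanishes.

def e1(t, p):                   # flow of f1: translation in x
    return np.array([p[0] + t, p[1]])

def e2(t, p):                   # flow of f2: shear, y -> y + t*x
    return np.array([p[0], p[1] + t * p[0]])

def Psi(t1, t2, p):
    """Commutator of flows: exp(-t2 f2) o exp(-t1 f1) o exp(t2 f2) o exp(t1 f1)."""
    return e2(-t2, e1(-t1, e2(t2, e1(t1, p))))

p = np.array([0.3, -0.2])
t1, t2 = 0.1, 0.05
lhs = Psi(t1, t2, p) - p
rhs = t1 * t2 * np.array([0.0, 1.0])     # t1*t2*[f1,f2](p)
print(np.allclose(lhs, rhs))
```

Tracing the four flows by hand gives $\Psi(t_1,t_2)(x,y)=(x,\,y+t_1t_2)$, in agreement with the bracket.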

2015, 35(9): 4367-4384
doi: 10.3934/dcds.2015.35.4367

**Abstract:**

The paper is devoted to the $BV$-relaxation of a dynamical system whose right-hand side is a $p$th degree polynomial with rational powers of a control under a uniform bound on its $L_p$-norm, and with coefficients containing an ordinary measurable bounded control.

Under natural convexity assumptions, we give an explicit representation of generalized solutions to the control system by a measure differential equation. The main results concern an optimal impulsive control problem for the relaxed system: We establish the existence of a minimizer, and give necessary optimality conditions in the form of a Maximum Principle.

2015, 35(9): 4385-4414
doi: 10.3934/dcds.2015.35.4385

**Abstract:**

We consider a model predictive control approach to approximate the solution of infinite horizon optimal control problems for perturbed nonlinear discrete time systems. By reducing the number of re-optimizations, the computational load can be lowered considerably, at the expense of reduced robustness of the closed-loop solution against perturbations. In this paper, we propose and analyze an update strategy based on re-optimizations on shrinking horizons, which is computationally less expensive than full-horizon re-optimization while still allowing for rigorously quantifiable robust performance estimates.
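For context, the expensive baseline that shrinking-horizon updates aim to cheapen is the standard MPC loop with a full re-optimization at every step. A bare-bones sketch on a toy scalar system; the dynamics, costs, horizon, and control grid are all illustrative assumptions, not the paper's setting:

```python
import numpy as np
from itertools import product

# Toy model predictive control loop with full re-optimization at every step:
# scalar unstable system x+ = 2x + u, |u| <= 1, stage cost x^2 + 0.1 u^2.
# Dynamics, costs, horizon, and control grid are illustrative assumptions.

def cost(x, us):
    """Finite-horizon cost of applying the control sequence us from state x."""
    J = 0.0
    for u in us:
        J += x**2 + 0.1 * u**2
        x = 2 * x + u
    return J + x**2                      # terminal penalty

def mpc_step(x, horizon=3, grid=np.linspace(-1.0, 1.0, 21)):
    """Exhaustive search over control sequences; apply only the first move."""
    best = min(product(grid, repeat=horizon), key=lambda us: cost(x, us))
    return best[0]

x = 0.4
for _ in range(8):                       # closed loop: re-optimize, apply, repeat
    x = 2 * x + mpc_step(x)
print(abs(x) < 0.05)                     # the unstable state is steered to ~0
```

Every closed-loop step solves a fresh finite-horizon problem; a shrinking-horizon update would instead reuse the tail of the previous plan over a shorter horizon.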

2015, 35(9): 4415-4437
doi: 10.3934/dcds.2015.35.4415

**Abstract:**

This paper is concerned with state-constrained discontinuous ordinary differential equations for which the corresponding vector field has a set of singularities that forms a stratification of the state domain. Existence of solutions and robustness with respect to external perturbations of the right-hand term are investigated. Moreover, notions of regularity for stratifications are discussed.

2015, 35(9): 4439-4453
doi: 10.3934/dcds.2015.35.4439

**Abstract:**

We propose a discretization of the optimality principle in dynamic programming based on radial basis functions and Shepard's moving least squares approximation method. We prove convergence of the value iteration scheme, derive a statement about the stability region of the closed loop system using the corresponding approximate optimal feedback law and present several numerical experiments.
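The Shepard moving least squares approximation used as the interpolation ingredient can be sketched in isolation: the approximant is a weighted average of nodal values with radial weights. Gaussian weights, the shape parameter, node count, and the test function below are illustrative choices (classical Shepard uses inverse-distance weights):

```python
import numpy as np

# Isolated sketch of a Shepard (moving least squares) approximation:
# s(x) = sum_i w_i(x) f_i / sum_i w_i(x), here with Gaussian radial weights.
# Shape parameter, node count, and test function are illustrative choices.

def shepard(x, centers, values, shape=30.0):
    """Shepard approximant: weighted average of nodal values."""
    w = np.exp(-(shape * (x[:, None] - centers[None, :])) ** 2)
    return (w * values).sum(axis=1) / w.sum(axis=1)

centers = np.linspace(0.0, 1.0, 50)
values = np.sin(2 * np.pi * centers)     # sampled target function
xq = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(shepard(xq, centers, values) - np.sin(2 * np.pi * xq)))
print(err < 0.2)                         # crude but stable fit, worst near the boundary
```

The weights are nonnegative and sum to one, so the approximant is a convex combination of nodal values; this is the monotonicity-type property that makes such operators attractive inside value iteration.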

2015, 35(9): 4455-4475
doi: 10.3934/dcds.2015.35.4455

**Abstract:**

Dynamic optimization problems for differential inclusions on manifolds are considered. A mathematical framework for the derivation of optimality conditions for generalized dynamical systems is proposed. We obtain optimality conditions in the form of generalized Euler-Lagrange relations and in the form of partially convexified Hamiltonian inclusions by using metric regularity of the terminal and dynamic constraints.

2015, 35(9): 4477-4501
doi: 10.3934/dcds.2015.35.4477

**Abstract:**

We consider proper orthogonal decomposition (POD) based Galerkin approximations to parabolic systems and establish uniform convergence with respect to forcing functions. The result is used to prove convergence of POD approximations to optimal control problems that automatically update the POD basis in order to avoid problems due to unmodeled dynamics in the POD reduced order system. A numerical example illustrates the results.
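The POD basis itself is obtained from a singular value decomposition of a snapshot matrix. A minimal sketch with synthetic snapshots (an illustrative family of Gaussian profiles, not an actual parabolic solve):

```python
import numpy as np

# Sketch of the POD ingredient: stack solution snapshots as columns, take the
# leading left singular vectors as the reduced basis, and measure the
# projection error. The snapshots here are a synthetic family of Gaussian
# profiles rather than solutions of a parabolic system.

rng = np.random.default_rng(1)
xg = np.linspace(0.0, 1.0, 100)
centers = rng.uniform(0.3, 0.7, 40)
snapshots = np.stack([np.exp(-((xg - c) ** 2) / 0.1) for c in centers], axis=1)

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :5]                          # rank-5 POD basis

proj = basis @ (basis.T @ snapshots)      # orthogonal projection onto the basis
rel_err = np.linalg.norm(snapshots - proj) / np.linalg.norm(snapshots)
print(rel_err < 0.05)                     # a few modes already capture the family
```

By the Eckart-Young theorem the truncated SVD gives the best rank-5 approximation of the snapshot matrix; the point of the paper is what happens when the dynamics later leave the span of the snapshots, which motivates updating the basis.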

2015, 35(9): 4503-4525
doi: 10.3934/dcds.2015.35.4503

**Abstract:**

Much recent climate research suggests that the transition from non-renewable to renewable energy should be expedited. To address this issue we use an optimal control model, based on an integrated assessment model of climate change that includes two types of energy production. After setting up the model, we derive necessary optimality conditions in the form of a Pontryagin-type Maximum Principle. We use a numerical discretization method for optimal control problems to explore various policy scenarios. The algorithm allows us to compute both state and co-state variables by providing a consistent numerical approximation for the adjoint variables of the various scenarios. Our numerical method applies to control- and state-constrained problems as well as to delayed control problems. In the policy scenarios, we explore ways in which the transition from non-renewable to renewable energy can be expedited.

2015, 35(9): 4527-4552
doi: 10.3934/dcds.2015.35.4527

**Abstract:**

In this paper we give a representation formula for the limit of the finite horizon problem as the horizon becomes infinite, with a nonnegative Lagrangian and unbounded data. It is related to the limit of the discounted infinite horizon problem as the discount factor goes to zero. We give sufficient conditions to characterize the limit function as the unique nonnegative solution of the associated HJB equation. We also briefly discuss the ergodic problem.

2015, 35(9): 4553-4572
doi: 10.3934/dcds.2015.35.4553

**Abstract:**

When using direct methods to solve continuous-time nonlinear optimal control problems, regular time meshes with equidistant spacing are most frequently used. However, in some cases these meshes cannot cope accurately with nonlinear behaviour, and uniformly increasing the number of mesh nodes may lead to a more complex problem. We propose an adaptive time-mesh refinement algorithm, considering different levels of refinement and several mesh refinement criteria; in particular, we use information from the adjoint multipliers to decide where to refine further. This technique is tested on two optimal control problems: one involving nonholonomic vehicles with state constraints, characterized by strong nonlinearities and discontinuous controls; the other a nonlinear problem for a compartmental SEIR system. The proposed strategy leads to results with higher accuracy and yet lower overall computational time, when compared to results obtained with equidistant meshes. We also apply the necessary condition of optimality in the form of the Pontryagin Maximum Principle to characterize the solution and to validate the numerical results.
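A generic form of the refinement loop reads as follows. The paper's criterion uses adjoint multiplier information; this toy version substitutes a step-halving local-error indicator on a scalar ODE, purely for illustration:

```python
import numpy as np

# Generic sketch of adaptive time-mesh refinement. The paper refines using
# adjoint multiplier information; this toy version substitutes a step-halving
# local error indicator for explicit Euler on the scalar ODE x' = x*cos(5t).

def f(t, x):
    return x * np.cos(5.0 * t)

def local_errors(mesh, x0=1.0):
    """Per-interval discrepancy between one Euler step and two half steps."""
    x, errs = x0, []
    for a, b in zip(mesh[:-1], mesh[1:]):
        h = b - a
        full = x + h * f(a, x)
        half = x + h / 2 * f(a, x)
        half = half + h / 2 * f(a + h / 2, half)
        errs.append(abs(full - half))
        x = half                          # continue from the more accurate value
    return np.array(errs)

mesh = np.linspace(0.0, 2.0, 21)
for _ in range(3):                        # three refinement sweeps
    errs = local_errors(mesh)
    worst = np.argsort(errs)[-5:]         # indices of the 5 worst intervals
    mids = (mesh[worst] + mesh[worst + 1]) / 2.0
    mesh = np.sort(np.concatenate([mesh, mids]))
print(len(mesh))                          # 21 nodes + 5 bisections x 3 sweeps = 36
```

Nodes accumulate where the indicator is largest, which is the qualitative behavior an adjoint-based criterion produces around discontinuities of the control.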

2015, 35(9): 4573-4592
doi: 10.3934/dcds.2015.35.4573

**Abstract:**

Relaxation refers to the procedure of enlarging the domain of a variational problem or the search space for the solution of a set of equations, to guarantee the existence of solutions. In optimal control theory relaxation involves replacing the set of permissible velocities in the dynamic constraint by its convex hull. Usually the infimum cost is the same for the original optimal control problem and its relaxation. But it is possible that the relaxed infimum cost is strictly less than the infimum cost. It is important to identify such situations, because then we can no longer study the infimum cost by solving the relaxed problem and evaluating the cost of the relaxed minimizer. Following on from earlier work by Warga, we explore the relation between the existence of an infimum gap and abnormality of necessary conditions (i.e. they are valid with the cost multiplier set to zero). Two kinds of theorems are proved. One asserts that a local minimizer, which is not also a relaxed minimizer, satisfies an abnormal form of the Pontryagin Maximum Principle. The other asserts that a local relaxed minimizer that is not also a minimizer satisfies an abnormal form of the relaxed Pontryagin Maximum Principle.

2015, 35(9): 4593-4610
doi: 10.3934/dcds.2015.35.4593

**Abstract:**

We extend the DuBois--Reymond necessary optimality condition and Noether's first theorem to variational problems of Herglotz type with time delay. Our results provide, as corollaries, the DuBois--Reymond necessary optimality condition and the first Noether theorem for variational problems with time delay recently proved in [Numer. Algebra Control Optim. 2 (2012), no. 3, 619--630]. Our main result is also a generalization of the first Noether-type theorem for the generalized variational principle of Herglotz proved in [Topol. Methods Nonlinear Anal. 20 (2002), no. 2, 261--273].

2015, 35(9): 4611-4638
doi: 10.3934/dcds.2015.35.4611

**Abstract:**

Optimal control problems with fixed terminal time are considered for multi-input bilinear systems with the control set given by a compact interval and the objective function affine in the controls. Systems of this type have been widely used in the modeling of cell-cycle specific cancer chemotherapy over a prescribed therapy horizon for both homogeneous and heterogeneous tumor populations. Necessary conditions for optimality lead to concatenations of bang and singular controls as prime candidates for optimality. In this paper, the method of characteristics will be formulated as a general procedure to embed such a controlled reference extremal into a field of broken extremals. Sufficient conditions for the strong local optimality of a controlled reference bang-bang trajectory will be formulated in terms of solutions to associated sensitivity equations. These results will be applied to a model for cell cycle specific cancer chemotherapy with cytotoxic and cytostatic agents.

2015, 35(9): 4639-4663
doi: 10.3934/dcds.2015.35.4639

**Abstract:**

We propose a population model for TB-HIV/AIDS coinfection transmission dynamics, which considers antiretroviral therapy for HIV infection and treatments for latent and active tuberculosis. The HIV-only and TB-only sub-models are analyzed separately, as well as the full TB-HIV/AIDS model. The respective basic reproduction numbers are computed, and equilibria and stability are studied. Optimal control theory is applied to the TB-HIV/AIDS model, and optimal treatment strategies for individuals co-infected with HIV and TB are derived. Numerical simulations of the optimal control problem show that non-intuitive measures can lead to a reduction of the number of individuals with active TB and AIDS.
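As a drastically simplified illustration of how a treatment rate enters such compartmental models (the TB-HIV/AIDS model above has many more compartments and time-varying controls), consider an SIR-type system with a constant treatment rate $u$: treatment lowers the effective reproduction number $\beta/(\gamma+u)$ and hence the epidemic peak. All parameters are illustrative:

```python
# Drastically simplified illustration of treatment as a control in a
# compartmental epidemic model: SIR-type dynamics with constant treatment
# rate u, which lowers the effective reproduction number beta/(gamma + u).
# Parameters are illustrative.

def peak_infected(u, beta=0.5, gamma=0.1, dt=0.01, T=200.0):
    """Forward-Euler simulation; returns the epidemic peak."""
    S, I = 0.99, 0.01
    peak = I
    for _ in range(int(T / dt)):
        dS = -beta * S * I
        dI = beta * S * I - (gamma + u) * I
        S, I = S + dt * dS, I + dt * dI
        peak = max(peak, I)
    return peak

print(peak_infected(0.2) < peak_infected(0.0))   # treatment lowers the peak
```

An optimal control formulation would make $u(t)$ time-dependent and trade the treatment cost against the infection burden, which is the structure of the problem studied in the paper.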

2015, 35(9): 4665-4681
doi: 10.3934/dcds.2015.35.4665

**Abstract:**

In this paper we provide a design methodology for computing control strategies for a primary dynamical system operating in a domain where other dynamical systems are present and interactions with them are of interest or are being pursued in some sense. Information from the other systems is available to the primary system only at discrete time instants and is assumed to be corrupted by noise. Given this limited and noisy information, the primary system has to make a decision based on the estimated behavior of the other systems, which may range from cooperative to noncooperative. This decision is reflected in the design of the most appropriate action, that is, the control strategy of the primary system. The design is illustrated on some particular collision-avoidance scenarios.

2018 Impact Factor: 1.143
