Controlled mean-field backward stochastic differential equations with jumps involving the value function
DOI: 10.1007/s11424-016-4275-5 · zbMath: 1380.93290 · OpenAlex: W2403383299 · MaRDI QID: Q1691939
Publication date: 25 January 2018
Published in: Journal of Systems Science and Complexity
Full work available at URL: https://doi.org/10.1007/s11424-016-4275-5
Keywords: value function; Poisson random measure; Hamilton-Jacobi-Bellman (HJB) equation; dynamic programming principle (DPP); mean-field backward stochastic differential equation (mean-field BSDE) with jumps
MSC classification: Stochastic ordinary differential equations (aspects of stochastic analysis) (60H10); Dynamic programming in optimal control and differential games (49L20); Optimal stochastic control (93E20); Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games (49L25)
Related Items (7)
Cites Work
- Adapted solution of a backward stochastic differential equation
- Stochastic representation for solutions of Isaacs' type integral-partial differential equations
- A general stochastic maximum principle for SDEs of mean-field type
- Mean-field backward stochastic differential equations and related partial differential equations
- Mean-field backward stochastic differential equations: A limit approach
- Second-order elliptic integro-differential equations: viscosity solutions' theory revisited
- Mean field games
- Stochastic optimization theory of backward stochastic differential equations with jumps and viscosity solutions of Hamilton-Jacobi-Bellman equations
- Efficient and equilibrium allocations with stochastic differential utility
- Backward stochastic differential equations coupled with value function and related optimal control problems
- Stochastic maximum principle in the mean-field controls
- On the stochastic maximum principle. Fixed time of control
- Backward stochastic differential equations and integral-partial differential equations
- A General Stochastic Maximum Principle for Optimal Control Problems
- Stochastic Differential Games and Viscosity Solutions of Hamilton–Jacobi–Bellman–Isaacs Equations
- Probabilistic interpretation for systems of quasilinear parabolic partial differential equations
- A Generalized dynamic programming principle and hamilton-jacobi-bellman equation
- Stochastic Differential Utility
- User’s guide to viscosity solutions of second order partial differential equations
- Necessary Conditions for Optimal Control of Stochastic Systems with Random Jumps
- Backward Stochastic Differential Equations in Finance
- Stochastic Near-Optimal Controls: Necessary and Sufficient Conditions for Near-Optimality
- Ambiguity, Risk, and Asset Returns in Continuous Time
- Necessary Conditions for Continuous Parameter Stochastic Optimization Problems