Consensus using Min-sum Algorithms

October 22, 2009 by

Consider the problem of minimizing a cost function defined on the vertices of an undirected graph {(V,E)} in a decentralized way. Each vertex {i \in V} has an associated variable {x_i} which can take values in a set {\mathcal{X}_i}, and {\boldsymbol{x}} is the vector of decision variables. The optimization problem is defined as

\min_{x}F(x):=\sum_{i\in V}f_i(x_i)+\sum_{(i,j)\in E}f_{ij}(x_i,x_j),

\text{subject to }x_i\in\mathcal{X}_i.

This is called a pairwise graphical model because each term involves at most a pair of decision variables. One method for solving this problem in a decentralized way is the message-passing or min-sum algorithm. The idea of the algorithm is that any vertex can compute an estimate of the minimum cost given the corresponding estimates of its neighbors. This is essentially solving a dynamic program at each vertex, in which discrete time is replaced by hop distance. To see how it resembles dynamic programming, consider the simplest case of a chain of {n} vertices. The optimization problem can be rewritten, from the perspective of any vertex {i}, as

\min_{x_i\in\mathcal{X}_i}f_i(x_i)+\boldsymbol{1}_{\{i>1\}}J^{\ast}_{i-1\to i}(x_i)+\boldsymbol{1}_{\{i<n\}}J^{\ast}_{i+1\to i}(x_i),

where {\boldsymbol{1}} is the indicator function and

J^{\ast}_{i-1\to i}(x_i):=\min_{x_1,...,x_{i-1}}\sum_{j=1}^{i-1}f_j(x_j)+\sum_{j=1}^{i-1}f_{j,j+1}(x_j,x_{j+1}),\quad \forall 1<i\leq n,

and

J^{\ast}_{i+1\to i}(x_i):=\min_{x_{i+1},...,x_n}\sum_{j=i+1}^{n}f_j(x_j)+\sum_{j=i}^{n-1}f_{j,j+1}(x_j,x_{j+1}),\quad \forall 1\leq i< n.

The “cost-to-go” functions {J^{\ast}_{i-1\to i}(\cdot)} and {J^{\ast}_{i+1\to i}(\cdot)} capture the effect of the decision at vertex {i} on the cost incurred by its neighbors to the left and to the right, respectively. Each vertex iteratively updates its cost-to-go functions using only the cost-to-go functions of its neighbors. This is an implementation of the deterministic dynamic programming algorithm.

The case of a simply connected graph (a tree) is not much harder. Each vertex now has to incorporate the cost-to-go functions from all of its neighbors in its minimization problem

\min_{x_i\in\mathcal{X}_i}f_i(x_i)+\sum_{u\in N(i)}J^{\ast}_{u\to i}(x_i),

where now {N(i)} denotes the set of all neighbors of vertex {i} and {J^{\ast}_{j\to i}(x_i)} is the cost-to-go from vertex {j} to vertex {i}. Again this is just a deterministic dynamic program, which terminates after a number of iterations equal to the diameter of the graph (which corresponds to the time horizon in DP).
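
As a rough illustration (a sketch with hypothetical data structures, not the exact procedure of [1]), the min-sum messages on a tree can be computed by the following Python recursion, where vals[i] is the finite value set of vertex i, f[i][x] is the vertex cost, and g[(i, j)][(x_i, x_j)] is the edge cost:

def pair_cost(g, u, i, xu, xi):
    # each edge is stored once, in either orientation
    return g[(u, i)][(xu, xi)] if (u, i) in g else g[(i, u)][(xi, xu)]

def message(u, i, nbrs, vals, f, g, memo):
    # Cost-to-go J*_{u->i}(x_i): the best cost of the part of the tree
    # reached through neighbor u, as a function of the value chosen at i.
    if (u, i) not in memo:
        J = {}
        for xi in vals[i]:
            J[xi] = min(
                f[u][xu] + pair_cost(g, u, i, xu, xi)
                + sum(message(w, u, nbrs, vals, f, g, memo)[xu]
                      for w in nbrs[u] if w != i)
                for xu in vals[u])
        memo[(u, i)] = J
    return memo[(u, i)]

def local_minimum(i, nbrs, vals, f, g):
    # the minimization at vertex i once all incoming messages are available
    memo = {}
    return min(f[i][xi] + sum(message(u, i, nbrs, vals, f, g, memo)[xi]
                              for u in nbrs[i])
               for xi in vals[i])

On a tree this recursion terminates after at most diameter-many hops, exactly as described above; on a graph with cycles it would not, which is the difficulty discussed next.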

Cycles cause difficulty by creating non-terminating recursions and paths of infinite length, which make the dynamic program infinite-horizon in nature. According to [1], however, min-sum algorithms can also be applied to graphs with cycles by using appropriate normalization constants in the cost-to-go functions.

One of the most important applications of min-sum algorithms is the consensus problem, where the vertices try to average a set of numbers in a distributed way. This problem can be transformed into the problem above by defining the decision variables as the beliefs of the vertices and the cost function as some measure of deviation from the mean.
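
For instance (one natural choice, not necessarily the formulation used in [1]), if {y_i} is the number initially held by vertex {i}, one may take

f_i(x_i) = (x_i - y_i)^2, \qquad f_{ij}(x_i, x_j) = \gamma (x_i - x_j)^2,

with a large edge weight {\gamma}; the pairwise terms then force the {x_i} toward a common value, while the single-vertex terms push that common value toward the average of the {y_i}.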

In this project I want to apply approximate methods and finite-horizon relaxations to solve the consensus problem using the min-sum algorithm in the deterministic case, and then investigate the effect of introducing uncertainties on the performance of the algorithm. I will also compare the performance of this algorithm with that of existing consensus algorithms, such as linear consensus.

[1] C.C. Moallemi. A Message-Passing Paradigm for Optimization. Ph.D. Thesis, Stanford University, Stanford, CA, September 2007.


Optimal Networked Control – a DP Perspective

October 22, 2009 by

Problem Description

Consider a system in which a plant is controlled over a multi-hop, fully synchronized wireless network composed of nodes equipped with radio transceivers, where each node shares a common notion of the control application. The network consists of a set of controllers along with sensors and actuators interacting with the plant.

The plant’s dynamics can be described by a linear time-varying discrete-time system

x_{k+1} = A_{k} x_{k} + B_{k} u_{k},
y_{k} = C_{k} x_{k}.

To describe the network, the following notation is used:

  • G=(V,E) is an undirected graph representing radio connectivity in the network, with \left| V \right| = t.
  • \Omega:\mathbb{I}\cup \mathbb{O}\rightarrow V, where \mathbb{I} is a set of plant input signals mapped to actuators (\Omega(\mathbb{I})=A\subset V), while \mathbb{O} is a set of output signals from the plant mapped to sensor nodes (\Omega(\mathbb{O})=S\subset V).

The goal of the project is to optimize a distributed scheme used to calculate the plant’s control input. The proposed scheme consists of a linear iteration where, at each time step, every node updates its value to be a weighted average of its own previous value and those of its neighbors. In addition, since some of the nodes are connected to the plant’s sensors, a node’s update procedure will take into account the current values from all sensors in its neighborhood. Since some of the nodes are connected to the actuators, the plant input will be a linear combination of the values of all nodes in the actuators’ neighborhoods.

If we denote by z_{k} the vector consisting of the internal values of all nodes in the network at time k, the update strategy for the network is:

z_{k+1} = W_{k}z_{k} + E_{k}y_{k}
u_{k} = F_{k}z_{k}

where the structures of the matrices E_k, F_k and W_k are determined by the network topology. Therefore the system evolution in each iteration can be presented as:

\hat{x}_{k+1}=\left[\begin{array}{c}  x_{k+1}\\  z_{k+1}\end{array}\right] = \left[\begin{array}{c c} A_{k} & B_{k} F_{k}\\ E_{k} C_{k} & W_{k} \end{array}\right] \left[\begin{array}{c}  x_{k}\\ z_{k}\end{array}\right] = \hat{A}_k \hat{x}_k
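
As a small sanity check of this block structure, the augmented matrix and one step of the closed loop can be assembled directly (a sketch with generic numpy arrays; all names are placeholders of mine):

import numpy as np

def augmented_matrix(A, B, C, W, E, F):
    # \hat{A}_k = [[A_k, B_k F_k], [E_k C_k, W_k]]
    return np.vstack([np.hstack([A, B @ F]),
                      np.hstack([E @ C, W])])

def step(A, B, C, W, E, F, x, z):
    # one iteration of the interconnected plant/network dynamics
    return A @ x + B @ (F @ z), W @ z + E @ (C @ x)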

Our goal is to compute matrices E_k, F_k and W_k that minimize the quadratic cost (for a finite-horizon) defined as:

\sum_{k=0}^{N-1} \left \{ x_{k}^T Q_k x_k + u_{k}^T R_k u_k \right \} + x_{N}^T Q_N x_N =

\sum_{k=0}^{N-1} \hat{x}_{k}^T \left [ \begin{array}{cc} Q_{k} & 0 \\ 0 & F_k^T R_{k} F_k \end{array}\right]\hat{x}_{k} + x_{N}^T Q_N x_N

Using the DP algorithm we wish to compute the optimal control (in this case matrices E_k, F_k and W_k) where:

J_N^*(x_N) = x_N^T Q_N x_N

J_k^*(\hat{x}_k) = \min_{W_k,E_k,F_k} \Bigg \{ \hat{x}_k^T \begin{bmatrix} Q_k & 0 \\ 0 & F_k^T R_k F_k \end{bmatrix} \hat{x}_k + J_{k+1}^* \Bigg( \begin{bmatrix} A_k & B_k F_k \\ E_k C_k & W_k \end{bmatrix} \hat x_k \Bigg) \Bigg \}

Some simplification can be obtained if the matrices E_k and F_k are assumed to be 0-1 matrices, and therefore fully determined by the underlying network topology. In this case only a minimization over W_k is performed.

Since it is most unlikely that an analytical solution of the problem can be obtained, the focus of the project will be on computational methods that can be used to determine the optimal linear iterative strategy.
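
Under the 0-1 assumption above, one crude computational approach (only a sketch under assumed inputs, not necessarily the method that will be adopted) is to evaluate the finite-horizon cost of a candidate sequence of W_k matrices by simulating the augmented system, and to hand this evaluation to a numerical optimizer restricted to the sparsity pattern allowed by the network:

import numpy as np

def finite_horizon_cost(Ahat_list, Qhat_list, QN, xhat0, n):
    # Cost of one candidate design: stage costs on the augmented state
    # plus the terminal cost on the plant part x_N (first n coordinates).
    xhat, cost = xhat0, 0.0
    for Ahat, Qhat in zip(Ahat_list, Qhat_list):
        cost += float(xhat @ Qhat @ xhat)
        xhat = Ahat @ xhat
    xN = xhat[:n]
    return cost + float(xN @ QN @ xN)

Here Ahat_list would be built from the candidate W_k via the block expression above and Qhat_list from Q_k and F_k^T R_k F_k; a derivative-free or gradient-based optimizer could then search over the free (topology-allowed) entries of each W_k.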

Optimal Rate and Power Control for Layered Multicast in Cognitive Radio Networks

October 22, 2009 by

In layered multicast [1], a signal (e.g., video or audio) is encoded into multiple layers with different rates and sent from one transmitter to an arbitrary number of receivers. Depending on the network conditions, receivers can receive different numbers (all or only a portion) of the layers and obtain reconstructed signals of different qualities. Compared with single-rate multicast, layered multicast is more advantageous in heterogeneous networks because it can satisfy receivers with different requirements simultaneously, so that the network can be utilized more efficiently.

A multicast session can be described by a tree in which the root and leaf nodes are the transmitter and receivers, respectively, while the internal nodes play the role of relays [2]. Each link in the tree has a weight representing the capacity of the corresponding link in the real network. The goal of rate control in layered multicast is to determine the rates of all the internal and leaf nodes that achieve optimality according to a certain criterion. When there is only one multicast session in the network, rate control is simple: every internal node transmits to its child nodes the maximum number of layers it receives from its parent that does not exceed the link capacity. However, the problem becomes complicated when two or more multicast sessions in one network compete for shared links. In this case, we can define a utility function on the received rates in all sessions and solve an optimization problem whose objective is to maximize the overall utility.

In [2], Kar and Tassiulas proposed a dual-based iterative algorithm using Lagrangian relaxation to solve the rate control problem for layered multicast in a wireline network (each link has a fixed capacity). Each iteration includes two steps: a link price update and a session rates update. They further showed that the session rates update step can be solved by dynamic programming, which greatly reduces the computational cost and allows the algorithm to be performed in a distributed manner. Indeed, because of the tree structure of multicast, the session rates update step is a finite-state-space (because the number of layers is finite) deterministic dynamic programming problem with perfect state information:

\displaystyle   J_i(x_k) = \sum_{i' \in B_i} \underset{x_{k'} \leq x_k}{\text{max}}\{ J_{i'}(x_{k'}) - c(x_{k'}) \}, \ \ \ \ \ (1)

where {x_k} is the rate of a node, taking values from the rates of the different layers, {J_i(x_k)} is the maximum achievable utility of the sub-tree rooted at node {i} when node {i} has rate {x_k}, {B_i} is the set of node {i}'s children, and {c(\cdot)} is a cost function. Clearly, (1) is a typical dynamic programming recursion with which we can recursively compute {J_i(x_k)} and finally obtain {J_0(x_k)}, the maximum achievable utility for the tree rooted at node 0, i.e., the multicast session.
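
A rough Python sketch of recursion (1), under assumed data structures (children lists, a finite list of layer rates that includes 0, a cost function c as in (1), and a hypothetical utility term at the leaf receivers, which is not spelled out above):

def max_utility(i, rate, children, rates, utility, cost, memo):
    # J_i(rate): maximum achievable utility of the sub-tree rooted at
    # node i when node i receives at 'rate'.
    if (i, rate) not in memo:
        if not children[i]:                 # leaf node = receiver
            memo[(i, rate)] = utility(i, rate)
        else:
            memo[(i, rate)] = sum(
                max(max_utility(c_node, r, children, rates, utility, cost, memo)
                    - cost(r)
                    for r in rates if r <= rate)
                for c_node in children[i])
    return memo[(i, rate)]

Calling max_utility(0, max(rates), ...) then returns the maximum achievable utility for the whole session rooted at node 0.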

When considering layered multicast in a cognitive radio network, the problem becomes more complicated because the cognitive radio network is full of randomness:

  • There is interference between neighboring nodes, which implies that not all nodes are allowed to communicate simultaneously. In [3], interference is taken into account by adding constraints to the optimization problem so that only a subset of nodes can transmit in the same time slot. However, there are other ways of avoiding interference, such as random access.
  • Because of multipath propagation or shadowing, the wireless fading channel is time-varying, which means the channel capacity for each link is not a fixed value, but a random variable depending on the channel condition of that link and the power allocated to it.
  • In cognitive radio networks, the available frequency bands at each node are different, depending on its location and environment. Again, the available bands at each node can be modeled as random variables [4]. In addition to rate control, we have to do frequency control, which again is a power control problem, i.e., allocating limited power among the available frequency bands.

When all these sources of randomness in cognitive radio networks are taken into account, the rate control problem is no longer a deterministic optimization problem. Instead, the session rates update step described in (1) becomes a stochastic dynamic programming problem

\displaystyle   J_i(x_k) = \sum_{i' \in B_i} \underset{x_{k'} \leq x_k}{\text{max}}\{ J_{i'}(x_{k'}) - E_{\vec w_k}[c(x_{k'},\vec w_k)] \}, \ \ \ \ \ (2)

where {\vec w_k} is a vector containing all the random variables described above.
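
The only change relative to the deterministic sketch above is that the cost term in the inner maximization is replaced by a sample-average estimate of E_{\vec w}[c(x, \vec w)], e.g. (with an assumed interface draw_w() for generating channel/band realizations):

def expected_cost(rate, draw_w, cost, num_samples=1000):
    # crude Monte Carlo estimate of E_w[ c(rate, w) ]
    return sum(cost(rate, draw_w()) for _ in range(num_samples)) / num_samples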

The goal of this project has three aspects. First, we assume that power is fixed and that full channel and frequency information is known, and solve (2). This is not a difficult task, but from it we wish to gain some intuition about how the optimal control policy changes as network characteristics change. Second, we will develop a power control algorithm, which is expected to be distributed, while still assuming perfect prior knowledge about channel and network conditions. Finally, we will develop stochastic learning algorithms [5] that solve the optimal rate and power control problem without using any prior information about the channel and network. This is extremely important because in practice full channel and network information is either inaccurate or unavailable.


[1] S. McCanne, V. Jacobson, and M. Vetterli, “Receiver-driven layered multicast”, in Proc. ACM SIGCOMM, Stanford, CA, Aug. 1996, pp. 117-130.

[2] K. Kar and L. Tassiulas, “Layered multicast rate control based on Lagrangian relaxation and dynamic programming”, in IEEE J. Sel. Areas Commun., vol. 24, no. 8, pp. 1464-1474, Aug. 2006.

[3] L. Bui, R. Srikant, and A. Stolyar, “Optimal resource allocation for multicast sessions in multi-hop wireless networks”, in Phil. Trans. R. Soc. A, no. 366, pp. 2059-2074, Mar. 2008.

[4] Y. Shi and T. Hou, “A distributed optimization algorithm for multi-hop cognitive radio networks”, in Proc. IEEE INFOCOM, pp. 1292-1300, Apr. 2008.

[5] A. Ribeiro, “Stochastic learning algorithms for optimal design of wireless fading networks”, submitted to IEEE INFOCOM, Jul. 2009.

Optimization of Complex Models of Socio-Economic Systems

October 22, 2009 by

1. Introduction

Models of socio-economic systems usually have an intricate structure and are stochastic. While this complexity comes deliberately from the desire to capture and explain the behavior of the systems (white-box models), it usually requires familiarity with the model to be able to interpret and analyze the results. Recently, there has been a strong initiative in many fields, such as economics, decision sciences, and psychology, to take a descriptive white-box approach to modeling phenomena. Yet these models have not met the expectations of many. This is mainly due to the lack of mature analysis tools that help verify, validate, fine-tune, and optimize the models.

This study focuses on applications of stochastic control tools to models that use complex socio-cognitive agents for simulating socio-economic systems. The models are agent-based models. The current aim is to optimize the decisions of a certain agent within the simulation so as to reach a desired goal within a given simulation time period. An immediate relaxation of this goal that complicates the process is to let the simulation end when the desired goal is reached (possibly infinite horizon). Also notice that the current aim assumes that an outside oracle with perfect knowledge is dictating the agent’s decision making. It might be easier to think of the agent as disabled in the simulation, with the optimization playing its role. Another more complicated but slightly more realistic way to look at the same goal is to have an agent with partial information that makes its decisions based on its “belief state”, with an approximate DP running at the back end. Currently, without much research on the possible techniques, it seems that certainty equivalent control (CEC) or rollout policies are suitable for this problem.

The first effort for this proposal is to derive a set of parameters that will define the state at a certain step. This set is defined as the minimal dynamic set of parameters that is sufficient for the code to obtain the next state. The next section explains the progress so far. Since the mathematical model of the system, {f(x_k, u_k)}, that is necessary to determine the next state of the world is impossible to formulate, simulation will be the key to approximating the cost. A cost function will be defined as usual, {c(x,u)}.

The parameters in the state space are often continuous within a range; hence their discretization will be necessary. I am currently working on a simplified model with few agents. The number of parameters per agent seems to be high and grows with the number of agents, so I will try to aggregate some of the parameters or use other techniques to reduce the size of the state space when necessary.

2. Problem Formulation

The agents in the world base their decisions solely on the current state of the world. Each agent perceives the state of the world and the other agents around it. The agents are socio-cognitive, i.e., they are aware of the agents around them and have feelings of their own and toward other agents. They develop emotions based on their own actions and the actions that they perceive. Each agent’s action has a certain impact on determining the next state of the world. The next state of the world depends only on the actions taken in the previous step. The stochastic nature of the model comes from the randomness in the results and effects of the actions.

The decision of an agent ({i \in A}) depends on its values ({V_{ik}}) and context ({{\mathbb C}_{k}}) at time {k}. {{\mathbb C}_k} determines the possible set of actions that an agent {i} can choose from at time {k}. Each action has an actor and a target, {t \in G}. Hence, the parameters of all agents belong to {{\mathbb C}_k}. It is reasonable to assume that the set of agents is fixed during the simulation, i.e. {\textrm{card}(A_k) = \textrm{card}(A_0) = n \textrm{ }\forall k}. The actions afforded by {t} to the agent {i} depend on other elements of {{\mathbb C}_k}, such as the relationship between them ({R_k(i,j)\in(-1,1)}). The relationship is set at {k=0} and might change based on the actions taken during the simulation. Notice that the relationship matrix grows with the square of the number of agents, {n^2}. Each action taken in the world has the chance to impact all the agents in the world, as they perceive everything that is happening around them. This impact is captured by a set of variables called activations, {\Omega_{ik}}, which are specific to agent {i} and time {k}. The actor of a decision gets activations directly from doing that action. The actor also gets activations from the result of that action, together with the other agents involved (target and perceiving agents). The values of the agents, {V_{ik}}, differentiate the agents from one another. The agents place different weights on different values, meaning each agent assigns varying importance to the same set of goals, standards and preferences. These weights on values are static throughout the simulation horizon for all agents, i.e. their goals, standards and preferences do not change. Each node of the value tree has two activation tanks, called success ({\omega_s \in(0,1)}) and failure ({\omega_f\in(0,1)}). For each agent at each time step, there is a set of activations associated with the values. Hence, we have the following definition

\displaystyle \Omega_{ik} = [\omega_s \textrm{ } \omega_f]_{ik} \qquad \forall i \in A \textrm{ , } \forall k.
One might think of the change in these tanks as the derivative of the mood of the agent, since feelings about oneself and toward others are calculated using {\Omega_{ik}}. Hence {\Omega_{ik}} for each agent {i} is part of {{\mathbb C}_k}. Although it depends on the model structure, the minimal set of parameters that make up the state space consists of the activations and the relationship values.

\displaystyle X_k \supseteq [\Omega_{1k} \textrm{ } \Omega_{2k} \ldots \Omega_{nk}\textrm{ } R_k(1,2)\textrm{ } R_k(1,3) \ldots R_k(1,n)\textrm{ } R_k(2,1) \ldots R_k(n,n-1)]

Additionally, my control variable is the action that the agent chooses from its action set at time {k}, {u_{k}}. Notice that the optimization is over a single chosen agent’s action space. The system considered can be represented as {X_{k+1} = f(X_k, u_k)}, where no explicit mathematical representation of {f} exists.
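
Purely to illustrate the bookkeeping (the names and shapes are placeholders of mine, not part of the model), the state vector above can be assembled from the activations and the off-diagonal relationship entries as follows:

import numpy as np

def state_vector(omega, R):
    # omega: (n, 2) array with the [success, failure] activations of each
    # agent; R: (n, n) relationship matrix with entries in (-1, 1).  The
    # diagonal of R is dropped, matching the ordering R(1,2), ..., R(n,n-1).
    n = R.shape[0]
    off_diagonal = R[~np.eye(n, dtype=bool)]   # row-major, diagonal removed
    return np.concatenate([omega.ravel(), off_diagonal])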

3. Notes and Agenda

In the current formulation of the state space, each agent has around 100 parameters, since the value tree has around forty-five nodes, which is doubled when split into {\omega_s} and {\omega_f}. Also note that these parameters are all continuous variables. The current step in this project is to finalize the state space of the model, i.e., add model-specific parameters to the state space, which might mean modifying the model at hand (because I am trying to keep the state space as small as possible). The next step is to obtain an initial cost function. In the meantime, I am looking into literature that focuses on control policies when a mathematical model of the system is not available. Once the literature to be used is finalized, the application remains.

Dynamic Programming for Real-time Scheduling of Control Systems

October 22, 2009 by

1. Problem Statement

In this project, I will consider the problem of real-time scheduling for a control system. The control system consists of a discrete-time LTI plant and a discrete-time LTI controller. The plant has the following state-space model:

x_{k+1} = A_P x_k + B_P u_k
y_k = C_P x_k

where {x_k \in \mathbb{R}^n}, {u_k \in \mathbb{R}^m}, {y_k \in \mathbb{R}^p}, for {k = 0,1,\dots}, and {A_P, B_P, C_P} are real matrices of appropriate dimensions. The controller is designed for the plant using standard linear control theory and has the following state-space model:

z_{k+1} = A_C z_k + B_C y_k
u_k = C_C z_k

The controller is designed with perfect communication between the plant and the controller: at the beginning of each time period, the controller receives the full output vector {y_k} of the plant, and at the end of the period, it sends the full control vector {u_k} to the plant.

In this project, I consider the situation when the communication between the plant and the controller is imperfect. Specifically, in each period, only a subset of the plant outputs can be read by the controller, specified by a schedule {\sigma_y(k)}, which is a vector in {\{0,1\}^p} where a {1} element means the corresponding output value is read. Similarly, in each period, only a subset of the control values can be sent to the plant, specified by a schedule {\sigma_u(k) \in \{0,1\}^m}. The problem I aim to solve in this project is the scheduling problem, i.e., finding the schedules {\sigma_y(k)} and {\sigma_u(k)} so as to minimize the difference between the resulting closed-loop system and the closed-loop system with perfect communication (called the ideal system). The difference between the two systems is measured as

\displaystyle   J(x_0) = \sum_{k=0}^\infty \|y_k - \tilde{y}_k \|^2 \ \ \ \ \ (1)

where {y_k} is the output of the ideal system, and {\tilde{y}_k} the output of the system with schedules {\sigma_y(k)} and {\sigma_u(k)}, both starting from the same initial state {x_0}.

2. Proposed Scheduling Algorithm

The scheduling problem can be formulated as a DP, with objective function (1) and controls {\sigma_y(k)} and {\sigma_u(k)}. For real-time scheduling, I propose a receding-horizon algorithm: in each period, the scheduler solves the DP over a finite horizon, the first-period schedules {\sigma_y} and {\sigma_u} are applied, and the process repeats in the next period.
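
For small p, m and a short horizon, the receding-horizon step can even be done by brute force; the sketch below (with an assumed helper sim_step that applies one period of the scheduled closed loop and returns the next state and output) enumerates all schedules and returns the first-period pair minimizing the deviation (1) over the horizon:

import itertools
import numpy as np

def best_first_schedule(sim_step, state0, ideal_outputs, p, m, horizon):
    outputs = list(itertools.product([0, 1], repeat=p))   # candidate sigma_y
    inputs = list(itertools.product([0, 1], repeat=m))    # candidate sigma_u
    best_cost, best_first = float('inf'), None
    for plan in itertools.product(itertools.product(outputs, inputs),
                                  repeat=horizon):
        state, cost = state0, 0.0
        for (sy, su), y_ideal in zip(plan, ideal_outputs):
            state, y = sim_step(state, np.array(sy), np.array(su))
            cost += float(np.sum((y - y_ideal) ** 2))
        if cost < best_cost:
            best_cost, best_first = cost, plan[0]
    return best_first    # (sigma_y(k), sigma_u(k)) to apply this period

The enumeration grows as 2^{(p+m) x horizon}, which is precisely why more efficient and approximate methods are of interest.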

For the project, I will investigate the theoretical and practical aspects of the proposed algorithm. I will compare the performance of this algorithm with that of other, simpler algorithms, such as the round-robin algorithm. I would also like to improve the efficiency of the algorithm’s computation, using for example approximation methods.

3. Project Plan

The project will be carried out in progressive steps as follows:

  • First, I will consider the simple case of full-state feedback, i.e. {y_k = x_k} and no schedule {\sigma_y}.
  • Then, I will progressively make the problem more complicated by adding the schedule {\sigma_y}, using partial information ({C_P \neq I}), and adding disturbances. An observer (a Kalman filter) is probably needed to compute an estimate of the plant’s state; the problem of scheduling Kalman filters has been considered in [1].

References
[1] J. L. Ny, E. Feron, and M. A. Dahleh, “Scheduling Kalman filters in continuous time,” in Proceedings of the American Control Conference, 2009.
[2] L. Zhang and D. Hristu-Varsakelis, “Communication and control co-design for networked control systems,” Automatica, vol. 42, no. 6, pp. 953–958, 2006.

Multiarmed Bandit Problems and PAC/Regret Based Formulations

October 21, 2009 by

The multiarmed bandit is a well-studied problem with many different variants in operations research. For now, one should assume that there is perfect state information.

Let {I} be a finite index set for a set {U = \lbrace u_i : i \in I \rbrace} of actions, where {\vert I \vert < \infty}; then the state is a vector {\mathbf{x}} in which each entry {x_i} corresponds to an action {u_i}. Similarly, consider a set of functions {\lbrace f_i: i \in I \rbrace}; if at time {k} action {u_i} is chosen, then {f_i: (\mathbf{x}_k, w_k) \rightarrow \mathbf{x}_{k+1}}, where {w_k} is some stochastic disturbance. Typically, {f_i} maps only coordinate {x^{(i)}} to a new value, while the other coordinates are unaffected; however, this need not be so. Nor does the function {f_i} need to be homogeneous with respect to time.

If one chooses action {u_i} at time {k}, the reward will be a function of the current state and the action, {R(\mathbf{x}_k, u_i)}, although, as above, the reward function need not be homogeneous with respect to time. Then, in the framework of dynamic programming, the problem can be expressed familiarly as follows:

\displaystyle  J_N(\mathbf{x}_N) = \displaystyle \max_{u \in U} R(\mathbf{x}_N, u) \ \ \ \ \ (1)

\displaystyle  J_k(\mathbf{x}_k) = \displaystyle \max_{u \in U} \left[ R(\mathbf{x}_k, u) + \mathbb{E}_{w_k} J_{k+1}(f_u(\mathbf{x}_k, w_k)) \right] \ \ \ \ \ (2)

To adapt the problem to an infinite horizon, one adds an additional action which returns a reward of {M}, at the cost of retiring. Let {\alpha} be the discount factor per period, and let {J(\mathbf{x}, M)} be the optimal reward attained starting from {\mathbf{x}}.

\displaystyle  J(\mathbf{x}, M) = \displaystyle \max (M, \displaystyle \max_{u \in U} L^u(\mathbf{x},M,J)) \ \ \ \ \ (3)

\displaystyle  L^u(\mathbf{x},M,J) = R(\mathbf{x}, u) + \alpha \mathbb{E}_w J(f_u(\mathbf{x}),M) \ \ \ \ \ (4)

The optimal policy for the infinite-horizon problem is computed in Bertsekas, Volume II.
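
For a finite state space and a known model (exactly the information the rest of this proposal treats as unavailable), recursion (3)-(4) can be evaluated by straightforward value iteration; the following sketch uses assumed interfaces for the reward and transition model:

def retirement_values(states, actions, R, next_dist, M, alpha, iters=500):
    # J(x, M) = max( M, max_u [ R(x, u) + alpha * E_w J(f_u(x, w), M) ] ),
    # where next_dist(x, u) returns a list of (probability, next_state) pairs.
    J = {x: M for x in states}
    for _ in range(iters):
        J = {x: max(M, max(R(x, u)
                           + alpha * sum(p * J[xn] for p, xn in next_dist(x, u))
                           for u in actions))
             for x in states}
    return J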

In principle, if one knew the reward function as well as the distribution of the disturbances, maximizing the expected reward would be a matter of solving the dynamic program above. However, in practice, this information is unobtainable. The intersection of this problem with machine learning is an automated scheme for producing the “best” possible policy relative to the optimal policy attainable when one possesses complete information.

For this project, one is interested in comparing two competing methodologies for handling the problem. The first uses the probably approximately correct (PAC) framework, which first samples actions to best approximate the model, whose true description is assumed to be unknown, as studied by Even-Dar and Mannor [1]. However, the most important question in this learning scheme is when one ought to stop learning. As learning comes at a cost, one should learn long enough to obtain a “good” approximation, but no longer, so as to maximize rewards. Even-Dar and Mannor also study a model-free learning scheme called Q-learning which hopes to achieve the same end.

In removing the model, Q-learning relies on a mechanism called action elimination, which judges certain actions not to belong to the optimal policy and stops the algorithm from further sampling them. The need for action elimination is due to the heavy computation of Q-learning, as well as the slow convergence of its estimates. In both Q-learning and PAC, the algorithm is simple enough that one can prove attractive theorems regarding the distance from the optimal policy that the action-elimination-based algorithm will achieve. The primary mathematical tools used to obtain these are typically concentration inequalities such as the Hoeffding and Bernstein bounds.
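
As a concrete instance of the kind of procedure analyzed in [1] (with simplified constants and stopping rule, so only a sketch), a Hoeffding-based action-elimination loop for the basic stochastic bandit looks as follows:

import math

def action_elimination(pull, num_arms, delta, max_pulls=100000):
    # pull(a) returns a reward in [0, 1] for arm a.  Arms whose upper
    # confidence bound falls below the best lower confidence bound are
    # eliminated; the surviving arm(s) are returned.
    active = set(range(num_arms))
    counts, means = [0] * num_arms, [0.0] * num_arms
    total = 0
    while len(active) > 1 and total < max_pulls:
        for a in list(active):
            r = pull(a)
            counts[a] += 1
            total += 1
            means[a] += (r - means[a]) / counts[a]       # running average
        radius = {a: math.sqrt(math.log(4 * num_arms * counts[a] ** 2 / delta)
                               / (2 * counts[a]))
                  for a in active}
        best_lower = max(means[a] - radius[a] for a in active)
        active = {a for a in active if means[a] + radius[a] >= best_lower}
    return active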

In contrast to the above is the regret-minimization approach introduced by Kleinberg. The goal of this project will mainly be to understand the theoretical differences between the two methods, as well as the typical tradeoffs in choosing one over the other. Further, if time permits, some empirical evaluation of the relative performance of some representative algorithms will be carried out.

[1] E. Even-Dar, S. Mannor, and Y. Mansour, “Action Elimination and Stopping Conditions for the Multi-Armed Bandit and Reinforcement Learning Problems”, Journal of Machine Learning Research, vol. 7, 2006.

Distributed Model Predictive Control

October 14, 2009 by

1. System Definition

We consider the case of a large-scale state-space system with dynamics

x(k+1) = A x(k) + B u(k)

y(k) = C x(k)

where this system can be decomposed into a collection of {N} individual plants with an added dependence on neighboring plants:

x_i(k+1) = A_ix_i(k)+ B_iu_i(k) + \sum_{j\in \mathcal {N}_i} W_{ij} x_j(k)
y_i(k) = C_ix_i(k)

The plant dimensions are {x_i\in {\mathbb R}^{n_i}}, {u_i\in {\mathbb R}^{m_i}} and {y_i \in {\mathbb R}^{p_i}}. The matrix {W_{ij}} encodes the dependence between plants {i} and {j}. The graph {G=(\mathcal{V}, \mathcal{E})} is an abstraction of the network of dependencies. The vertex set {\mathcal V} contains the plants {\{1,\ldots,N\}} and the edge set is defined by {(i,j)\in \mathcal{E}} when {W_{ij} \neq \mathbf{0}\in {\mathbb R}^{n_i\times n_j}}. The set of plants on which the update of plant {i} depends is the neighborhood {\mathcal {N}_i =\{j\in\mathcal {V} : (i,j)\in \mathcal{E}\}}. Initially I will assume that the local plant state is completely observable, {C_i=I_{n_i\times n_i}}. This assumption can be relaxed later, in which case local observers would need to be employed.

2. Control Problem

The control problem we are concerned with is the standard optimal control problem defined in the centralized case:

\displaystyle J^*(x(0))= \min_{u\in U(x)} \sum_{k=0}^{K-1} \left[x'(k) Q x(k) + u(k)'R u(k)\right] + x(K)'Qx(K)

We are interested in solving the above optimal control problem for the control sequence {u(0),\ldots, u(K-1)}. We apply the control {u(0)} computed for our current state {x(0)} and update the state. At the next time step we relabel the current state as {x(0)} and recompute the optimal control sequence.

Solving this problem amounts to solving an extremely large-scale constrained quadratic program. We would like to construct a distributed solution in which each plant solves a local optimal control problem, {J_i^*(x_i(0))}, that recovers the solution to the centralized optimal control problem. We would like to show that {J_i^*(x_i(0))} is a quadratic program which can be solved using local information exchange, similar to what is discussed in [2].
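
To make the size of that quadratic program concrete, the following sketch (unconstrained case only, time-invariant weights, terminal weight taken equal to {Q} as in the cost above; an illustration rather than the intended distributed method) builds the batch matrices for the centralized problem:

import numpy as np
from scipy.linalg import block_diag

def batch_matrices(A, B, Q, R, K):
    # Stack x(1..K) = Phi x(0) + Gamma u, where u stacks u(0..K-1), and
    # form the cost  u' H u + 2 x(0)' F' u + const.
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, K + 1)])
    Gamma = np.zeros((K * n, K * m))
    for i in range(K):
        for j in range(i + 1):
            Gamma[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar = block_diag(*([Q] * K))   # stage and terminal weights on x(1..K)
    Rbar = block_diag(*([R] * K))
    H = Gamma.T @ Qbar @ Gamma + Rbar
    F = Gamma.T @ Qbar @ Phi
    return Phi, Gamma, H, F

In the unconstrained case the minimizer is u* = -H^{-1} F x(0); the point is that H is a square matrix of size K times the total input dimension, so this centralized solve (with constraints, a QP of the same size) is exactly what a distributed formulation should avoid.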

3. Related work

Distributed receding horizon control is considered in [1]; however, there the coupling is caused by the sparsity pattern of the {Q} and {R} matrices in the centralized optimal control problem formulation. The dynamics of the individual plants had no interdependence beyond that induced by the control. In my work, I am concerned with interdependent plants, and I will establish reasonable restrictions on the sparsity patterns of the matrices {Q} and {R} as I proceed. One reasonable assumption that I am considering is to require {Q_{ij}= \mathbf{0}\in {\mathbb R}^{n_i \times n_j}} and {R_{ij}= \mathbf{0}\in {\mathbb R}^{m_i \times m_j}} for all {(i,j)\not\in \mathcal{E}}. This would require that the cost dependence also be local. As long as both the state interdependence and the control interdependence are relatively sparse, I can construct a relatively sparse graph {G} for which both the control and state dependencies are nonzero only between neighboring plants.

4. References

[1] N. Motee and A. Jadbabaie, “Receding horizon control of spatially distributed systems over arbitrary graphs,” in Proceedings of the IEEE Conference on Decision and Control (CDC), 2006.
[2] A. Nedic, A. Ozdaglar, and P. Parrilo, “Constrained consensus and optimization in multi-agent networks,” LIDS Technical Report 2779, 2008.

Project Proposals

October 7, 2009 by

Your project proposals are due on Wednesday, October 21. You need to give me a copy (hard copy or email), and in addition, you must post your proposal on this blog. The reason is that I want your classmates to see and comment on your project. Everybody will have to make at least one technical suggestion about some other project; I will announce the (random) pairings later. People in the class have diverse technical backgrounds that can complement yours, and I encourage you to discuss your projects beyond what I formally request.

WordPress supports LaTeX; see the FAQ. Still, it’s not that easy to use directly if you have a lot of equations, so I recommend that you use, for example, the little converter LaTeX2WP written by Luca Trevisan. With this tool, you can use the same file to prepare a hard copy for me and the HTML code for this blog.

TODO: please create an account on WordPress.com by Wednesday, Oct. 14. Then send me the email address that you used to create that account, so that I can add you as a contributor to the blog.