Submodularity in Sensor Placement Problems

Many problems in Artificial Intelligence, and in computer science in general, are hard to solve. In practical terms, this means that a computer could take hundreds, thousands, or even millions of years of computation to solve them exactly. Thus, many scientists design algorithms that solve difficult problems approximately, but within a sensible time period, i.e., seconds, minutes, or hours.

One such problem is the sensor placement problem. The key question is where to place a given number of sensors in order to achieve the best coverage of the area of interest. To solve this problem exactly, the computer has to evaluate all possible ways of assigning the sensors we have to the different locations. To give some numbers: with 5 sensors and 100 possible locations, one has to try 75,287,520 combinations in order to find the best arrangement. Imagine what happens when the problem is about placing hundreds of sensors in a city where there are hundreds or thousands of candidate locations.
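
That figure is simply the binomial coefficient "100 choose 5", and it explodes as either number grows. A quick sanity check in Python (my addition, using only the standard library):

```python
import math

# Ways to choose 5 locations out of 100 candidates
print(math.comb(100, 5))              # 75287520

# At city scale the count becomes astronomical:
# choosing 100 locations out of 1000 yields a number ~140 digits long
print(len(str(math.comb(1000, 100))))
```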

In such problems, submodularity comes in handy. It is an extremely important property, used in many sensor placement problems, that describes the behaviour of set functions. The main idea is that adding an element to a small set has a higher return/utility/value than adding the same element to a larger set. This can be better understood with an example. Imagine having 10,000 sensors scattered in a big room, taking measurements of the temperature every 2 hours. Now imagine adding another sensor to that room. Have we really gained much by doing so? Here, we have a large set and we add something to it. Now imagine the same room having only 1 sensor. Adding 1 more can give us a better understanding of some corner, or a better estimate of the true average temperature of the room. That sensor is much more valuable than in the previous case. This is what I mean by saying that adding something to a smaller set has a higher utility.
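
To make the diminishing-returns idea concrete, here is a minimal Python sketch (my own illustrative example, not from any cited work) with a toy coverage function: each sensor covers a set of grid cells, and the utility of a placement is the number of distinct cells covered. The marginal gain of the same sensor is never larger when it is added to a bigger set, which is exactly the submodularity condition F(A ∪ {x}) − F(A) ≥ F(B ∪ {x}) − F(B) whenever A is a subset of B.

```python
def coverage(placement, covers):
    """Number of distinct cells covered by the chosen sensors."""
    covered = set()
    for sensor in placement:
        covered |= covers[sensor]
    return len(covered)

# Hypothetical coverage sets: which cells each sensor location sees.
covers = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {2, 3, 4, 5}}

small = {"s1"}           # A: a small placement
large = {"s1", "s3"}     # B: a larger placement, with A inside B

# Marginal gain of adding the same sensor "s2" to each set.
gain_small = coverage(small | {"s2"}, covers) - coverage(small, covers)
gain_large = coverage(large | {"s2"}, covers) - coverage(large, covers)

print(gain_small, gain_large)   # 1 0 -> the smaller set gains more
```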

It turns out that this property is very useful in mathematics, and in computer science and AI in particular, as it allows us to build algorithms with theoretical guarantees. It has been proved that a simple greedy algorithm achieves at least 63% (more precisely, a 1 − 1/e fraction) of the optimal value. This was initially proved by Nemhauser et al. in the mathematics literature, and later applied by Krause et al. in the field of computer science, specifically to the sensor placement problem. The image below shows this property in terms of diagrams, to give a better feeling of what it is about.

Submodularity (taken from Meliou et al. PowerPoint presentation)
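
To give a feeling for how simple the greedy approach is, here is a minimal Python sketch (again my own illustration, not code from the cited papers). At each step it adds the candidate location with the largest marginal gain in coverage; for monotone submodular utilities, such as the toy coverage function above, this is exactly the loop that carries the 63% guarantee.

```python
def coverage(placement, covers):
    """Number of distinct cells covered by the chosen sensors."""
    covered = set()
    for sensor in placement:
        covered |= covers[sensor]
    return len(covered)

def greedy_placement(locations, covers, k):
    """Pick k locations one at a time, each maximising the marginal gain."""
    chosen = set()
    for _ in range(k):
        gains = {loc: coverage(chosen | {loc}, covers) - coverage(chosen, covers)
                 for loc in locations - chosen}
        chosen.add(max(gains, key=gains.get))
    return chosen

# Hypothetical coverage sets, as in the previous sketch.
covers = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {2, 3, 4, 5}}
print(greedy_placement(set(covers), covers, 2))   # {'s3', 's1'}
```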