Applied mathematics involves applying mathematics to problems encountered in practice. Here we cite:
1) space satellite dynamics impacted by solar pressure
2) air pollution monitoring by lasers using Radon transforms
3) statistical data analysis using Levy stable distributions
4) statistical hypothesis testing using Levy stable distributions
5) Chernoff bounds for detection of different Markov processes
Why do space satellites need rockets once they reach orbit? Space satellites are perturbed by many slight forces once in orbit, deflecting them away from the desired orientation toward the earth, the sun, or a distant planet, moon, or star. By far the largest perturbing force is the pressure of sunlight itself: about one dyne per square meter, the equivalent of a house fly pushing off on a one square meter area, very small but always present.
This suggests taking advantage of this force to control the orientation of a space satellite by attaching Mylar sails to each of four sides, with the sails functioning as a weather vane that points the satellite at the center of the sun. By rotating the Mylar sails about their longitudinal axes, torques are generated that keep the satellite pointed at the earth at all times. Because the available force is so small, the space satellite must be physically small, roughly 20 kilograms; if the sails are too long, they can flex, leading to structural dynamics issues.
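As a rough, back-of-the-envelope illustration of the magnitudes involved (not taken from any actual design), the sketch below assumes the one dyne per square meter figure quoted above; the sail area, moment arm, and moment of inertia are hypothetical values chosen only to show the scale of the resulting torque.

```python
# Back-of-the-envelope solar-sail torque estimate (illustrative values only).
SOLAR_PRESSURE = 1e-5      # N/m^2, roughly one dyne per square meter as cited above

sail_area  = 2.0           # m^2 per sail (hypothetical)
moment_arm = 3.0           # m from spacecraft center to sail center of pressure (hypothetical)
inertia    = 50.0          # kg*m^2 about the control axis (hypothetical, ~20 kg class satellite)

force  = SOLAR_PRESSURE * sail_area    # N on one sail
torque = force * moment_arm            # N*m when the sail is tilted for full effect
accel  = torque / inertia              # rad/s^2 angular acceleration

print(f"force  = {force:.2e} N")
print(f"torque = {torque:.2e} N*m")
print(f"angular acceleration = {accel:.2e} rad/s^2")
```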
Atmospheric pollution is a great concern for health. The first issue in atmospheric pollution is to measure the pollution, but how? Typically, as one example, air pollution is generated by an electrical power plant spewing particles of burnt matter into the atmosphere. The density of these particulates at each location is the measure of air pollution we want.
We propose shining lasers across a region onto detectors; the laser beams provide cross sections of the air density, attenuated by the air itself and by the pollutants it contains. We then use computer-assisted tomography to generate estimates of the air pollution density.
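A toy sketch of the tomography idea, under assumptions not in the original work: each beam measurement is treated as a line integral of the density over a small grid (the log form of Beer-Lambert attenuation), and the density is recovered by solving the resulting linear system in a least-squares sense rather than by a production Radon-inversion algorithm. Grid size, beam geometry, and the test density are all made up for illustration.

```python
import numpy as np

# Toy algebraic reconstruction: recover a density grid from line-integral measurements.
n = 8                                   # n x n density grid (hypothetical)
true_density = np.zeros((n, n))
true_density[2:5, 3:6] = 1.0            # a synthetic "plume"

def ray_row(angle, offset, n, step=0.1):
    """Approximate line-integral weights for one beam crossing the grid."""
    weights = np.zeros((n, n))
    d = np.array([np.cos(angle), np.sin(angle)])          # beam direction
    p0 = np.array([n / 2, n / 2]) + offset * np.array([-d[1], d[0]])
    for t in np.arange(-n, n, step):                      # march along the beam
        x, y = p0 + t * d
        i, j = int(np.floor(x)), int(np.floor(y))
        if 0 <= i < n and 0 <= j < n:
            weights[i, j] += step                         # path length through the cell
    return weights.ravel()

angles  = np.linspace(0, np.pi, 12, endpoint=False)       # beam angles (hypothetical)
offsets = np.arange(-n / 2 + 0.5, n / 2, 1.0)             # beam offsets across the region

A = np.array([ray_row(a, o, n) for a in angles for o in offsets])
b = A @ true_density.ravel()            # measured log-attenuations (noise-free here)

recon, *_ = np.linalg.lstsq(A, b, rcond=None)
print("max reconstruction error:", np.abs(recon.reshape(n, n) - true_density).max())
```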
This is a case study in how to statistically characterize a continuous or discrete time random process, following the methodology of the statistical software package S and its open source version R.
The central limit theorem of probability theory roughly states that a sum of independent identically distributed, suitably bounded random variables, properly normalized and centered, converges to a Gaussian distribution. In the 1930s, Paul Levy showed that if the suitably bounded restriction is relaxed so that one or a few contributions can be much larger than the remainder, convergence still holds, but the limiting distribution has a Fourier transform (characteristic function) of the form exp(i*delta*w - gamma*|w|^alpha) in the symmetric case, where w is the transform variable and 0 < alpha <= 2. The resulting distributions are called stable because linear combinations of such random variables are again stable random variables in the same class of distributions. This class of models was fit to noise on analog telephone lines, and this in turn led to a program in which hypothesis tests for each of the parameters were developed in the discrete time case and in continuous time (which leads to trivial, singular testing problems, since the sample paths contain infinitely many jumps of arbitrarily small size), in a generalization of scalar Kalman filtering, and in a method for generating stable random variables.
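As a small numerical check of the characteristic function quoted above, here it is evaluated in the symmetric case, with delta as location and gamma as dispersion (my labels), confirming that alpha = 2 reduces to the Gaussian case.

```python
import numpy as np

def stable_cf(w, alpha, gamma=1.0, delta=0.0):
    """Characteristic function of a symmetric stable law, 0 < alpha <= 2."""
    return np.exp(1j * delta * w - gamma * np.abs(w) ** alpha)

w = np.linspace(-5, 5, 201)

# alpha = 2 reduces to the Gaussian characteristic function exp(-gamma * w**2),
# i.e. a normal law with mean delta and variance 2 * gamma.
assert np.allclose(stable_cf(w, 2.0), np.exp(-w ** 2))

for alpha in (0.5, 1.0, 1.5, 2.0):
    phi = stable_cf(w, alpha)
    # smaller alpha: the characteristic function decays more slowly in w
    print(f"alpha={alpha}: |phi(5)| = {abs(phi[-1]):.3e}")
```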
Stable distributions are characterized by four parameters (location, dispersion, skewness, tail). In the Gaussian case, the hypothesis test for location is a linear test; in the non-Gaussian case, outliers are trimmed or weighted far less heavily. In the Gaussian case, the hypothesis test for dispersion is a chi-squared test; again, in the non-Gaussian case, outliers are trimmed or weighted far less heavily.
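To illustrate why outliers must be trimmed or down-weighted in the non-Gaussian case, a small experiment not taken from the original study: compare the sample mean (the linear Gaussian statistic for location) with a trimmed mean and the median on Cauchy data, the symmetric stable law with alpha = 1; the 25% trimming fraction is an arbitrary choice.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_location = 3.0

# Cauchy = symmetric stable with alpha = 1; its sample mean does not converge.
sample = true_location + rng.standard_cauchy(10_000)

print("sample mean  :", np.mean(sample))                 # dominated by outliers
print("trimmed mean :", stats.trim_mean(sample, 0.25))   # outliers trimmed away
print("median       :", np.median(sample))               # fully robust alternative
```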
In continuous time, stable process sample paths consist of a series of discrete jumps; because the jumps uniquely identify the parameters, observing an arbitrarily small time interval provides perfect resolution. The one exception is the stable law with alpha = 2, the Gaussian case, whose sample paths are continuous (no jumps) but nowhere differentiable.
Chernoff bounds are widely used in the detection and estimation of signals in noise. Here a generalization is offered for discriminating between two Markov stochastic processes.
A sample path of an independent increment process is sampled at evenly spaced time instants; the process can have one of two sets of parameters. How do we choose the sampling interval to maximize the probability of selecting the correct process?
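A sketch of how the sampling interval enters such a calculation, under assumptions that are mine rather than the author's: the two hypothesized independent increment processes are taken to be Poisson with rates lam0 and lam1, the number of samples n is held fixed, and the Chernoff exponent for Poisson increments is evaluated in closed form and minimized over the tilting parameter s.

```python
import numpy as np

# Chernoff bound for deciding between two Poisson (independent-increment) processes
# with rates lam0 and lam1, observed as n independent increments of width dt each.
lam0, lam1 = 1.0, 2.0       # hypothetical rates
n = 50                      # fixed number of samples

def chernoff_exponent(dt, s_grid=np.linspace(0.01, 0.99, 99)):
    m0, m1 = lam0 * dt, lam1 * dt
    # log E0[(p1/p0)^s] for Poisson increments, minimized over s in (0, 1)
    mu = m0 ** (1 - s_grid) * m1 ** s_grid - (1 - s_grid) * m0 - s_grid * m1
    return mu.min()

for dt in (0.1, 0.5, 1.0, 2.0, 5.0):
    bound = 0.5 * np.exp(n * chernoff_exponent(dt))   # bound on error probability
    print(f"dt={dt:4.1f}  Chernoff bound on error probability <= {bound:.3e}")
```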
A scalar discrete time autoregressive symmetric stable process is filtered linearly to minimize error dispersion, providing a natural generalization from the Gaussian case to the symmetric stable case.
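A minimal simulation of such a process, using Cauchy innovations (the symmetric stable law with alpha = 1, available directly in NumPy) so that no custom stable generator is needed; the autoregressive coefficient is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(1)
a = 0.8                     # autoregressive coefficient (hypothetical), |a| < 1
T = 1000

# x[t] = a * x[t-1] + w[t], with w[t] symmetric stable (alpha = 1, i.e. Cauchy)
x = np.zeros(T)
w = rng.standard_cauchy(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + w[t]

# Heavy-tailed innovations produce occasional very large excursions,
# which is why minimum-dispersion (rather than minimum-variance) filtering is used.
print("largest |x[t]| :", np.abs(x).max())
print("median  |x[t]| :", np.median(np.abs(x)))
```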
A logarithmic utility function is used to determine the returns for a log stable financial instrument. The numerical results suggest that the returns are relatively insensitive to parameter choices.
A method for simulating stable random variables is presented. It is a generalization of the method for generating two Gaussian random variables from two uniformly distributed random variables, taking sqrt(-2*ln(uniform1)) for the radius and the sine and cosine of 2*pi*uniform2 for the angle. The stable case involves a nonlinear transformation of two uniform random variables to simulate one stable random variable.
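A sketch of both transformations, assuming the symmetric (zero skewness) case and the standard Chambers-Mallows-Stuck form of the nonlinear transformation; the paper's exact formulation may differ.

```python
import numpy as np

rng = np.random.default_rng(2)

def box_muller(size):
    """Two uniforms -> two independent standard Gaussian random variables."""
    u1 = 1.0 - rng.random(size)               # uniform in (0, 1], avoids log(0)
    u2 = rng.random(size)
    r = np.sqrt(-2.0 * np.log(u1))            # radius
    return r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)

def symmetric_stable(alpha, size):
    """One symmetric alpha-stable variate per pair of uniforms (Chambers-Mallows-Stuck form)."""
    v = np.pi * (rng.random(size) - 0.5)      # uniform on (-pi/2, pi/2)
    w = -np.log(1.0 - rng.random(size))       # exponential(1), from the second uniform
    if alpha == 1.0:
        return np.tan(v)                      # Cauchy case
    return (np.sin(alpha * v) / np.cos(v) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * v) / w) ** ((1.0 - alpha) / alpha))

g, _ = box_muller(100_000)
s = symmetric_stable(2.0, 100_000)            # alpha = 2 is Gaussian with variance 2
print("Gaussian sample std        :", g.std())
print("alpha=2 stable std / sqrt2 :", (s / np.sqrt(2)).std())
print("alpha=1.5 stable, max |x|  :", np.abs(symmetric_stable(1.5, 100_000)).max())
```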
We survey the research program in applications of stable distributions: data analysis of telephone noise, hypothesis testing of discrete and continuous time processes, linear minimum error dispersion filtering of symmetric processes, simulation of stable random variables, and explicit solutions to single period investment returns for log stable financial instruments.
A communication link has a transmitter and a receiver. The receiver has a limited amount of storage: if the transmitter sends messages too fast, the receiver will have no place to store them. To pace the transmitter so no messages are lost, the transmitter has a window: once it has sent the maximum number of messages allowed by the window without receiving an acknowledgement of successful reception, it stops until an acknowledgement arrives. This leads to the well-known double buffering strategy, in which the transmitter is transmitting a message while the receiver is unloading another, so the receiver keeps two buffers for message storage. Double buffering takes advantage of one processor dedicated to handling input and one dedicated to handling output; it is what is employed in so-called time slot interchange switching matrices, which fill one buffer with the current time slot while the second buffer, holding the previous time slot, is being emptied.
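A toy sketch of the double-buffering idea, not any particular protocol: the receiver keeps one buffer being filled from the link and one being drained to the application, and the transmitter's window is the space remaining in the buffer being filled. Buffer size and message contents are hypothetical.

```python
from collections import deque

BUFFER_SIZE = 4                     # messages per buffer (hypothetical)

class DoubleBufferReceiver:
    """Receiver with two buffers: one being filled from the link, one being drained."""
    def __init__(self):
        self.filling = deque()
        self.draining = deque()

    def window(self):
        """Remaining space the transmitter may use before waiting for an acknowledgement."""
        return BUFFER_SIZE - len(self.filling)

    def receive(self, msg):
        if len(self.filling) >= BUFFER_SIZE:
            raise RuntimeError("transmitter overran the advertised window")
        self.filling.append(msg)

    def swap_and_drain(self):
        """Called when the draining buffer is empty: swap roles and acknowledge."""
        self.filling, self.draining = self.draining, self.filling
        drained = list(self.draining)
        self.draining.clear()
        return drained              # messages handed to the application

rx = DoubleBufferReceiver()
for i in range(BUFFER_SIZE):        # transmitter sends up to the window, then stops
    rx.receive(f"msg{i}")
print("acknowledged after draining:", rx.swap_and_drain())
print("window reopened to:", rx.window())
```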
A widely deployed transmission facility is T1, which carries 24 streams of 64 kbps traffic. If the T1 is engineered for no more than 1% blocking, this results in perhaps 18 streams being used for voice, leaving 6 x 64 kbps, or 384 kbps, available for data. In fact, this type of analysis is very misleading and leads to the false idea that all is well when it is not. The time scale for voice streams is 2-4 minutes, while the time scale for data streams is 20-40 msec, a mismatch of several orders of magnitude. What actually happens is that when the rare event of full blocking occurs, roughly 1% of the time, the link handles nothing but voice for 2-5 seconds; as far as the data is concerned, the link has failed, and this can trigger a restart of the data communications activity spanning potentially tens of seconds.
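The circuit count quoted above is the kind of number an Erlang B calculation produces. The sketch below uses the standard Erlang B recursion with a hypothetical offered load of 10 erlangs (my assumption, chosen so the answer matches the 18-stream figure above) to show how a 1% blocking target leaves only a few 64 kbps slots for data.

```python
def erlang_b(offered_load, circuits):
    """Erlang B blocking probability via the standard recursion."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = offered_load * b / (m + offered_load * b)
    return b

offered_load = 10.0          # erlangs of voice traffic (hypothetical value)
for circuits in range(1, 25):
    if erlang_b(offered_load, circuits) <= 0.01:
        print(f"{circuits} circuits meet the 1% blocking target "
              f"for {offered_load} erlangs, leaving {24 - circuits} x 64 kbps for data")
        break
```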
In a computer network, one method for modeling behavior to determine mean throughput and mean delay is to view the system as a network of queues, with the input of one queue being the output of another. Typically the queues are serially reusable resources with one or more processors. External inputs are modeled as Poisson streams of independent identically distributed tasks. The flows between the queues are not Poisson, but the mathematical analysis of such a system suggests that the queues behave as if the traffic into each stage were Poisson. A key goal of such a model is to find bounds on mean throughput and mean delay; if the desired performance lies outside this region, it will be impossible to achieve, and a redesign is needed. Two key regimes exist. In the first, all arrivals and processing times are deterministic, which provides the best possible performance: an upper bound on mean throughput and a lower bound on mean delay. In the second, all processing times are independent identically distributed exponential random variables, as are the interarrival times, which yields an upper bound on mean delay and a lower bound on mean throughput.
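A small numerical illustration of the two single-stage regimes, using the textbook formulas: with unit mean service time, the deterministic regime has no queueing below capacity (mean delay equals the service time), while the exponential M/M/1 regime has mean time in system 1/(mu - lambda); the utilization values are arbitrary.

```python
service_time = 1.0                       # mean service time (assumed), so capacity = 1 task/unit time

for rho in (0.3, 0.6, 0.9, 0.95):        # utilization = arrival rate * service time
    t_best  = service_time               # deterministic regime: no queueing below capacity
    t_worst = service_time / (1.0 - rho) # M/M/1 regime: mean time in system = 1/(mu - lambda)
    print(f"utilization {rho:4.2f}: mean delay between {t_best:.2f} and {t_worst:.2f} time units")
```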
Priority scheduling is critical when there is a single serially reusable resource, such as a processor or a table in a database, that must be accessed by all tasks. This is an obvious bottleneck, and the only way to improve performance is through priority scheduling. As tasks arrive and are queued, each task has a unique arrival time and an associated priority; the priority might be a static number, or it might be a window of time that is the acceptable total time for processing. There is only one queue, but where each task is inserted in the queue is determined by its arrival time and priority. In an interrupt driven processor, this also incurs overhead time: the processor must stop processing, examine the task that just arrived, determine where to insert it in the queue based on its priority, and then return to the highest priority task. This overhead is incurred by the same processor that must process the tasks; it is not useful work, but it is necessary, and it takes time away from useful work. This can lead to a sustained busy period in which the processor spends more and more time on overhead and less and less on useful work, leading to overload and a collapse in meeting priorities.
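A back-of-the-envelope sketch of the overhead effect, using a simpler fixed-overhead-per-task model than the sustained busy period dynamics described above; the processing and overhead times are hypothetical.

```python
# Effect of per-task scheduling overhead on processor saturation (hypothetical numbers).
useful_time   = 0.010    # seconds of useful processing per task
overhead_time = 0.002    # seconds of interrupt handling / priority insertion per task

for arrival_rate in (50, 70, 80, 83, 90):          # tasks per second
    utilization = arrival_rate * (useful_time + overhead_time)
    useful_frac = arrival_rate * useful_time
    status = "stable" if utilization < 1.0 else "OVERLOAD: queue grows without bound"
    print(f"{arrival_rate:3d} tasks/s: total utilization {utilization:.2f} "
          f"(useful {useful_frac:.2f}, overhead {utilization - useful_frac:.2f}) -> {status}")
```

Without the overhead term the processor would saturate only at 100 tasks per second; the per-arrival overhead pulls the saturation point down to roughly 83 tasks per second in this hypothetical setting.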
The mathematical analysis of this type of priority queueing system, under the assumptions of a Markov modulated arrival process, and independent identically distributed processing times for each task, can be handled by a technique pioneered by William Feller in An Introduction to Probability Theory and Its Applications, Volume II, and this technique is described in Chapter 10 of A Computer and Communication Network Performance Analysis Primer.