097 - Use of ML techniques to optimize the adaptive beam-steering capabilities of CRPAs for GNSS user receivers
DESCRIPTION
Controlled Reception Pattern Antennas (CRPAs) in GNSS receivers are a fundamental resource for countering jamming and spoofing; they are also effective for multipath mitigation and for the exclusion of non-line-of-sight (NLOS) signals. In the L-band, however, CRPAs require arrays of significant dimensions: arrays with more than four antenna elements quickly reach square-metre areas. Further improvements can be obtained with a more complex architecture based on space-time adaptive processing (STAP) or space-frequency adaptive processing (SFAP), which generally allows much higher levels of jammer cancellation against a wider range of threats.
Compact arrays, with inter-element spacing significantly lower than half a wavelength, reduce the physical dimensions at the cost of suboptimal spatial filtering performance; on the other hand, the loss can be balanced by an increased number of elements (“dense” arrays). A trade-off between narrow inter-element spacing and the number and placement of antenna elements is therefore worth investigating. Real-time adaptive beam-steering capability is necessary to maximize the efficiency of the spatial filtering, since the directions of the signals of interest and of the interferers to suppress are unpredictable and vary over time.
With a view to boosting the potential effectiveness of CRPAs for GNSS receivers, advanced real-time adaptive architectures with a medium number of antenna elements, compact dimensions and, possibly, space-time adaptive processing are extremely promising for the specific purpose of interference, spoofing and multipath suppression. However, because of the considerable number of variables to adapt in real time, such an optimization problem is complex to model and to solve efficiently.
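To make the adaptive spatial-filtering idea concrete, the following minimal sketch shows a conventional (non-ML) MVDR beamformer adapting the weights of a compact four-element array so that gain is preserved toward the signal of interest while a jammer is nulled. All parameters (array size, spacing, directions, powers) are illustrative assumptions, not values from this activity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_elem = 4
spacing = 0.25                   # inter-element spacing in wavelengths (compact, < 0.5)
theta_sig = np.deg2rad(10.0)     # assumed direction of the GNSS signal of interest
theta_jam = np.deg2rad(-40.0)    # assumed direction of the jammer

def steering(theta):
    """Narrowband steering vector for a uniform linear array."""
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

# Simulate snapshots: weak GNSS signal, strong jammer, thermal noise.
n_snap = 2000
s = 0.1 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
j = 10.0 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
noise = (rng.standard_normal((n_elem, n_snap))
         + 1j * rng.standard_normal((n_elem, n_snap))) / np.sqrt(2)
x = np.outer(steering(theta_sig), s) + np.outer(steering(theta_jam), j) + noise

# MVDR weights: minimize output power subject to unity gain toward the signal.
R = x @ x.conj().T / n_snap
a = steering(theta_sig)
w = np.linalg.solve(R, a)
w /= a.conj() @ w                # distortionless constraint toward the signal

gain_sig = np.abs(w.conj() @ steering(theta_sig))
gain_jam = np.abs(w.conj() @ steering(theta_jam))
print(f"gain toward signal: {gain_sig:.2f}, toward jammer: {gain_jam:.4f}")
```

In the architectures targeted by the activity, the weight vector grows to a full set of space-time (STAP) or space-frequency (SFAP) coefficients, which is precisely what makes the real-time optimization problem hard.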
The objective of the activity is to investigate the use of Machine Learning (ML) algorithms for smart antenna array design (including the optimization of array geometry and beamforming), a topic recently addressed by an increasing number of publications, typically in the communication domain (5G/6G in particular).
Many ML algorithms exist, for example multi-objective genetic algorithms, multi-label convolutional neural networks and Support Vector Machines. However, a clear consensus on the best choice for antenna array optimization has not yet been reached. Furthermore, the training model for the problem of optimizing the spatio-temporal filtering capability of a compact array, in the presence of an unpredictable and time-variant signal geometry, still needs to be identified. The effectiveness of an ML-based design has to be proven with respect to more conventional CRPAs.
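As a hedged illustration of one of the algorithm families mentioned above, the sketch below runs a simple genetic-style search over the element positions of a compact array, using the MVDR output quality for a fixed signal/jammer geometry as the fitness. Every number (aperture, population size, mutation scale, angles) is an illustrative assumption, not a recommendation of this activity.

```python
import numpy as np

rng = np.random.default_rng(2)
n_elem = 5
aperture = 1.0                                    # total aperture in wavelengths (compact)
theta_sig, theta_jam = np.deg2rad(15.0), np.deg2rad(-30.0)

def steering(pos, theta):
    """Steering vector for arbitrary element positions (in wavelengths)."""
    return np.exp(2j * np.pi * pos * np.sin(theta))

def fitness(pos):
    """MVDR figure of merit a^H R^-1 a (proportional to output SINR)."""
    a_s, a_j = steering(pos, theta_sig), steering(pos, theta_jam)
    R = 100.0 * np.outer(a_j, a_j.conj()) + np.eye(n_elem)   # jammer + noise
    w = np.linalg.solve(R, a_s)
    return float(np.real(a_s.conj() @ w))

# Genetic loop: keep the best half (elitism), refill with mutated copies.
pop = rng.uniform(0, aperture, size=(40, n_elem))
for _ in range(60):
    order = np.argsort([-fitness(p) for p in pop])
    parents = pop[order[:20]]
    children = parents + rng.normal(0, 0.02, parents.shape)
    pop = np.clip(np.vstack([parents, children]), 0, aperture)

best = pop[np.argsort([-fitness(p) for p in pop])[0]]
print("best layout (wavelengths):", np.round(np.sort(best), 3))
```

A real study would replace the single fixed geometry with the time-variant, multi-objective scenarios discussed above, which is where the choice of algorithm family becomes non-trivial.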
The activity is expected to attract the interest of several actors:
- Research institutions for the development of the models, preliminary studies and trade-off analyses
- High-end antenna technology providers, for the identification of the technological constraints and drivers and the validation of the proof of concept
- The GNSS receiver industry, for the understanding of the application context and expected market needs.
The tasks to be performed shall include:
- Definition of a new approach to the real-time adaptive control of CRPAs for GNSS, based on the use of ML to optimize the CRPA space-time filter coefficients in real time.
- Investigation of the possible use of dense arrays, with a trade-off analysis with respect to conventional CRPAs.
- In the ML context, the objectives shall include:
- the formulation of the optimization problem,
- the definition of the training model, and
- the generation of the datasets.
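The last task, dataset generation, can be sketched as follows: each training example pairs a randomly drawn signal/jammer geometry (input features) with the corresponding optimal spatial weights (labels), computed here from the ideal interference-plus-noise covariance. Array size, spacing and scenario statistics are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_elem, spacing = 4, 0.25        # assumed compact uniform linear array

def steering(theta):
    k = np.arange(n_elem)
    return np.exp(2j * np.pi * spacing * k * np.sin(theta))

def make_example():
    """One labelled example: random geometry -> optimal spatial weights."""
    theta_sig = rng.uniform(-np.pi / 3, np.pi / 3)   # signal of interest
    theta_jam = rng.uniform(-np.pi / 3, np.pi / 3)   # interferer
    a_s, a_j = steering(theta_sig), steering(theta_jam)
    jnr = 10 ** rng.uniform(2, 4)                    # jammer-to-noise ratio
    # Ideal interference-plus-noise covariance for the drawn geometry.
    R = jnr * np.outer(a_j, a_j.conj()) + np.eye(n_elem)
    w = np.linalg.solve(R, a_s)
    w /= a_s.conj() @ w                              # distortionless constraint
    features = np.array([theta_sig, theta_jam, np.log10(jnr)])
    return features, w

X, Y = zip(*(make_example() for _ in range(1000)))
X, Y = np.array(X), np.array(Y)
print(X.shape, Y.shape)
```

An actual training model for this activity would extend the label from spatial-only weights to full space-time (STAP) coefficients and cover time-variant, multi-interferer use cases.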
The main outputs of the activity will consist of:
- Choice of the best ML algorithm family for the adaptive CRPA control
- Trade-off analysis of the addressed CRPA architectures
- ML training model formulation and guidelines for its generation
- ML training datasets definition (examples for a few relevant use cases)