Greenhouse farming plays a key role in Qatar's drive toward self-sufficiency in food production and is actively promoted by the government. Building on this, and in line with the global trend toward intelligent technologies in agriculture, in this project we will develop a modular cyber-physical system that integrates multi-robot technologies and AI for precision agriculture (PA) in greenhouse farming. We will address key aspects of PA for the automated, intelligent gathering of time-series data and their processing into analytics that support quality- and production-critical management decisions. Using mobile robotic technologies enhanced by active vision, arm manipulators, and ambient sensing, we will develop automated services for the annotated acquisition of multi-sensorial data on plants, fruits, and environmental conditions. The acquired data will be used to incrementally build time-dependent AI / deep learning models, integrated with agriscience knowledge and methods, for: predicting the amount and quality of the yield; defining optimized harvesting schedules; tracing the evolution and quality of individual fruits and groups of fruits over time; labeling the final quality of individual fruits and of the crop; identifying the presence or onset of diseases and pests; and profiling environmental conditions inside the greenhouse to support online HVAC (Heating, Ventilation, and Air Conditioning) control that maintains optimal conditions. The project will develop and integrate the following modular components: an autonomous multi-robot system (MRS) for active field data gathering; AI / deep learning modules for building analytics models and supporting decision-making; and a graphical user interface for real-time monitoring and interaction. The MRS will feature ground robots moving along the greenhouse's rows, equipped with a robotic arm and imaging sensors (RGB and multi-spectral cameras).
Exploiting the arm's degrees of freedom, multiple images, annotated with time and position, will be acquired from different close-range viewpoints, overcoming local occlusions. Optionally, a robot carries (and recharges) a small flying multirotor that can acquire data at higher elevations and from any desired viewpoint. Robots are also equipped with sensors for environmental measurements (temperature, humidity, CO2). The use of an MRS allows data acquisition to proceed efficiently in parallel over different rows. MRS actions will be performed routinely (e.g., daily), so that processing is incremental. Collected data will be fed to AI models to incrementally construct time-dependent information maps that support timely decision-making (e.g., when to start harvesting, apply pesticides to selected plants, or tune irrigation). Analytics and online system status will be presented to the end-user through a graphical web app. From the end-user's point of view, the AI analytics models are the main project outcome: the project provides expert intelligence for greenhouse management. This target matches the labor scenario in countries such as Qatar, where relatively low-cost workers are amply available and can be effectively employed for manual operations (e.g., picking fruits), while the skilled professionals needed to ensure the efficacy, quality, and efficiency of processes are relatively scarce and expensive. AI for HVAC control is also specifically designed for Qatar, where extreme weather conditions make it challenging to achieve both healthy environmental conditions and low energy consumption. The services targeted by the project are of general strategic importance in farming; this is supported by the fact that multiple commercial products claim to offer equivalent services based on deployed sensor networks (e.g., RGB cameras). These products are expensive and invasive, requiring ad hoc installation for each greenhouse or plantation.
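To make the data flow concrete, the following minimal Python sketch shows how time- and position-annotated observations could accumulate into a time-dependent per-fruit map of the kind described above. All names here (Observation, FruitMap, the ripeness score) are illustrative assumptions, not the project's actual data model.

```python
# Illustrative sketch only: time/position-annotated observation records and an
# incremental map keyed by fruit identity, updated at each (e.g., daily) MRS pass.
from dataclasses import dataclass, field

@dataclass
class Observation:
    fruit_id: str     # identity maintained across passes via positional accuracy
    timestamp: float  # acquisition time (s)
    position: tuple   # (x, y, z) in a greenhouse reference frame
    ripeness: float   # hypothetical score from an RGB/NIR classifier, in [0, 1]

@dataclass
class FruitMap:
    """Time-dependent information map: each fruit accumulates a time series."""
    series: dict = field(default_factory=dict)

    def add(self, obs: Observation) -> None:
        self.series.setdefault(obs.fruit_id, []).append(obs)

    def ripeness_trend(self, fruit_id: str) -> float:
        """Change in ripeness between the first and last observation."""
        s = sorted(self.series[fruit_id], key=lambda o: o.timestamp)
        return s[-1].ripeness - s[0].ripeness

# Two daily passes over the same fruit; the trend feeds harvest-timing decisions.
fmap = FruitMap()
fmap.add(Observation("row3-plant7-f1", 0.0, (3.0, 7.0, 1.2), 0.20))
fmap.add(Observation("row3-plant7-f1", 86400.0, (3.0, 7.0, 1.2), 0.35))
print(round(fmap.ripeness_trend("row3-plant7-f1"), 2))  # -> 0.15
```

In this toy version, fruit identity is given; in the real system, associating detections across passes is exactly what the improved positional accuracy is meant to enable.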
In contrast, Greenhouse-5.0 will be designed to be plug-and-play: no greenhouse engineering is required. Moreover, compared to sensor networks, it will acquire more accurate data (close-up images rather than ceiling views, and RGB plus multispectral sensors), it will work in an active modality (e.g., to overcome occlusions), and it will achieve better positional accuracy (enabling individual fruits to be traced). Overall, our system will be more accurate, cheaper, more scalable, easier to maintain, and faster to deploy. The project will result in a working prototype implementation of the MRS, AI modules, and user interface, achieving a target TRL >= 6. For data acquisition, experimentation, and demonstration, we will exploit the greenhouse facilities of partner SARSTC and of supporting institution MME. The project is aligned with similar advanced research outside Qatar and will also deliver innovative scientific contributions: (i) novel deep learning / hybrid-convolutional models for detection, counting, localization, and classification in plants; (ii) AI models that learn from minimal datasets, exploit and model the temporality of observations, combine RGB and NIR imaging, and integrate analytical biochemical models and field knowledge from agriscience; (iii) an original combination of model-based reinforcement learning and transfer learning for online HVAC control. From a systemic point of view, the deployment of a modular, multi-purpose cyber-physical system acting autonomously, adaptively, and robustly in partially structured environments will be a major contribution. The team members' expertise synergistically covers all aspects of the project: CMUQ's LPI is an expert in mobile robotics, multi-robot systems, and AI; NHL Stenden's PI is an ML expert with specific knowledge of vision-based deep learning in agriculture; the PI from Milan is an expert in agronomics; Qatar's SARSTC is a national leader in farming, and its PI has solid field expertise; supporting partner MME will provide greenhouse facilities and end-user advice.
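The "model-based" half of contribution (iii) can be sketched in a few lines: fit a simple model of greenhouse thermal dynamics from logged transitions, then choose control actions by planning against that model. This toy omits both the reinforcement-learning algorithm and the transfer-learning component, and every constant and function name is hypothetical.

```python
# Minimal illustrative sketch of model-based HVAC control, not the project's
# method: learn a one-step linear thermal model from data, then plan the
# cooling level by minimizing predicted deviation from a target plus energy cost.

A_TRUE, B_TRUE = 0.1, -0.5  # hidden "true" dynamics, unknown to the controller

def step(temp, outdoor, u):
    """Simulated dynamics: heat exchange with outdoors plus cooling power u."""
    return temp + A_TRUE * (outdoor - temp) + B_TRUE * u

def fit_model(t1, t2):
    """Recover (a, b) from two logged transitions (temp, outdoor, u, next_temp)."""
    (T1, O1, u1, N1), (T2, O2, u2, N2) = t1, t2
    x1, x2 = O1 - T1, O2 - T2      # heat-exchange regressors
    d1, d2 = N1 - T1, N2 - T2      # observed temperature deltas
    det = x1 * u2 - x2 * u1
    a = (d1 * u2 - d2 * u1) / det
    b = (x1 * d2 - x2 * d1) / det
    return a, b

def plan(temp, outdoor, a, b, target=24.0, energy_w=0.05):
    """Pick the cooling level minimizing predicted deviation plus energy cost."""
    def cost(u):
        pred = temp + a * (outdoor - temp) + b * u
        return abs(pred - target) + energy_w * u
    return min([0.0, 1.0, 2.0, 3.0, 4.0], key=cost)

# Learn the model from two observed transitions, then plan under Qatar-like heat.
t1 = (30.0, 40.0, 0.0, step(30.0, 40.0, 0.0))
t2 = (30.0, 40.0, 2.0, step(30.0, 40.0, 2.0))
a, b = fit_model(t1, t2)
print(plan(30.0, 45.0, a, b))  # -> 4.0 (maximum cooling)
```

The energy-cost weight makes the trade-off between healthy conditions and consumption explicit; transfer learning would let a model fitted on one greenhouse warm-start the controller in another.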
An external board of experts from Qatar, the Netherlands, Portugal, and Italy is in place to give strategic guidance to the project.