Use cases


Automated analysis of the public tender market and predictive success scoring

In France, around 150,000 public contracts with a unit value above €90,000 excluding VAT are awarded each year, for a total of roughly €90 billion. They span many areas of economic activity. Many companies spend considerable time studying and analyzing tenders before deciding whether to respond, with no guarantee of success. Their own statistics, if well maintained, provide only a first indicator.

Machine Learning goes further: it integrates additional data to correlate the requirements of a tender with the characteristics of the company, while accounting for how those characteristics evolve over time and for external data.

  • Improved processes and productivity through automated analysis of tenders

  • Access to the scoring and decision-support tool for all employees, enabling collective analysis and fewer errors of judgment

  • Determination of a prediction score for winning tenders

  • A sufficient data history of past tenders and their outcomes

  • Assessment of the completeness of the data in the RFPs and of the respondent company’s own data

  • Updating the respondent company’s characteristics (strengths and weaknesses) over time to avoid model bias

  • Building and training a machine learning model

  • Real-time automated prediction generating a confidence score for winning the tender

  • Study time divided by 10; potential business flow multiplied by 5
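As a sketch, the confidence score could come from a logistic model over company/tender features. All feature names, weights, and the bias below are illustrative assumptions, not the deployed model:

```python
import math

# Purely illustrative features and weights -- not the actual model.
WEIGHTS = {
    "sector_match": 1.8,    # how well the tender's sector matches company expertise
    "capacity_ratio": 1.2,  # company capacity relative to the tender's size
    "past_win_rate": 2.5,   # historical win rate on similar tenders
}
BIAS = -2.0

def win_confidence(features):
    """Return a 0-1 confidence score for winning the tender."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing

score = win_confidence({"sector_match": 1.0, "capacity_ratio": 0.7, "past_win_rate": 0.6})
print(round(score, 2))  # ≈ 0.89
```

In practice the weights would be learned from the company’s tender history rather than set by hand.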


Creation of diagnostic tools

Eating disorders affect one million people in France (mostly women), including 600,000 young people under the age of 35. Between 2 and 5% of the world’s population is affected by one of their forms: binge eating, bulimia, or anorexia. These disorders cause health complications throughout life and increase the risk of early death. Diagnosis is difficult because patients find it hard to talk about their condition.

Artificial intelligence implemented on IA PARC makes it possible to automate detection, risk prediction, assistance in choosing the care pathway, and patient support.

  • Detection, diagnostic support, assisted analysis

  • Patient pathway analysis

  • Supporting the patient in maintaining healthy eating behavior

  • Difficulty in making a diagnosis and defining a treatment

  • Data retrieval

  • NLP semantic analysis with a BERT model to create a patient self-evaluation bot

  • Use of neural networks

  • Patient journey analysis by statistics and Machine Learning
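A minimal sketch of the self-evaluation bot’s triage step. A production system would use a fine-tuned BERT classifier on the patient’s answers; the vocabulary, scores, and thresholds below are purely illustrative stand-ins:

```python
# Toy stand-in for the BERT-based classifier: score a patient's free-text
# answer against indicative vocabulary (terms and weights are illustrative).
RISK_TERMS = {"binge": 2, "purge": 3, "guilt": 1, "restrict": 2}

def risk_score(answer):
    return sum(RISK_TERMS.get(tok, 0) for tok in answer.lower().split())

def triage(answer):
    """Map a self-evaluation answer to a hypothetical care-journey step."""
    score = risk_score(answer)
    if score >= 4:
        return "refer to specialist"
    if score >= 2:
        return "follow-up questionnaire"
    return "no flag"

print(triage("I binge and then purge"))  # refer to specialist
```

A transformer replaces the keyword scoring with learned semantic features, but the downstream routing into the care journey keeps this shape.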


Gap analysis tool between training needs and offers

To improve the performance of a community’s apprenticeship training policy, analyzing the gaps between training needs and apprenticeship contract offers is essential.

The Machine Learning implementation analyzes the gaps between the supply of contracts and the demand from apprentices based on four criteria:

  • the profession

  • the level of diploma being acquired

  • the geographical area

  • the age of the apprentice

The visualization application helps to quantify and qualify the gaps between supply and demand in order to decide on appropriate measures to reduce these gaps.

  • Creation of a predictive tool to support the choice of an apprenticeship policy

  • Visualization of supply and demand curves for apprenticeship contracts, with applicable criteria and filters

  • Self-assessment of forecast quality (helping to detect possible missing data)

  • Progressive integration of new data sources

  • Time required to analyze and extract useful data from the apprenticeship contract databases

  • Data analysis via Elasticsearch and development of a multi-criteria analysis prototype

  • Use of Machine Learning, time series, clustering and forecasting using Prophet

  • Model created in a collaborative IA PARC project, published, validated by the test team, and scaled for deployment in a visualization application for curves and prediction data
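The supply/demand gap over the four criteria can be sketched as a simple aggregation. The records and the grouping key below are hypothetical; the real system adds Prophet-based time-series forecasting on top of such aggregates:

```python
from collections import Counter

# Hypothetical records: (profession, diploma level, geographical area, age).
offers = [("baker", "CAP", "Brittany", 17), ("baker", "CAP", "Brittany", 18),
          ("welder", "BacPro", "Normandy", 19)]
demands = [("baker", "CAP", "Brittany", 17),
           ("welder", "BacPro", "Normandy", 18),
           ("welder", "BacPro", "Normandy", 19)]

def gap_by(supply, demand, key=lambda r: (r[0], r[2])):
    """Supply minus demand, aggregated here by (profession, area)."""
    s, d = Counter(map(key, supply)), Counter(map(key, demand))
    return {k: s.get(k, 0) - d.get(k, 0) for k in set(s) | set(d)}

gaps = gap_by(offers, demands)
print(gaps)  # positive = surplus of offers, negative = unmet demand
```

Swapping the `key` function switches the analysis to any combination of the four criteria (profession, diploma level, area, age).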


Quality control on automotive parts production line

In a manufacturing unit producing parts for the automotive industry, operators working in three 8-hour shifts visually inspect parts at the end of the manufacturing line. Their task is to remove machined parts with defects.

The work is tiring and unrewarding, and performance is limited. Detecting defects is also difficult.

With Deep Learning, defective parts can be detected and removed automatically at the end of the line.


  • Upgrading the work of operators at the end of the production line by evolving their task

  • Automation of the inspection by fixed cameras connected to a control PC, which analyzes the images in real time with a pre-trained AI

  • Improvement of detection performances

  • Lack of data history

  • Generating large amounts of data using gantries equipped with cameras to photograph 10 features of defective parts

  • Machine learning on the IA PARC from several hundred shots taken from different angles for each possible defect type

  • Real-time automatic detection of defective parts coming off the production line

  • On-site installation of a dedicated IA PARC

  • Processing software with operator station and camera control
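The real-time inspection loop can be sketched as follows. The frame-scoring function is a trivial stand-in for the pre-trained CNN, and the decision threshold is an assumption chosen to illustrate the eject/pass logic:

```python
# Sketch of the real-time inspection loop; threshold is an assumption.
DEFECT_THRESHOLD = 0.5

def score_frame(pixels):
    """Stand-in for a CNN returning P(defective) for one camera frame."""
    return sum(pixels) / len(pixels)

def inspect(pixels):
    """Decide whether the part leaving the line should be ejected."""
    return "eject" if score_frame(pixels) >= DEFECT_THRESHOLD else "pass"

print(inspect([0.9, 0.8, 0.7]))  # eject
print(inspect([0.1, 0.2, 0.1]))  # pass
```

In production, `score_frame` would run the trained network on the control PC, and the eject decision would drive an actuator on the line.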


Sensitive industrial maintenance

The project targets industries wishing to implement supervision and decision support tools related to predictive maintenance.

Fastpoint and MomentTech combine their expertise to collect and deliver data from industrial equipment (e.g. radiation protection measurement equipment) on the most constrained industrial sites, in order to analyze and exploit it with neural network models that deliver better results.

The objective of the NucleoT solution is to inform operators and subcontractors of the predictive maintenance actions to prioritize.

  • Ability to predict equipment failures and malfunctions as early as possible


  • Gathering, securing and transmitting data from radiation protection equipment

  • Identification of weak signals in radiation protection data

  • Reliability of data interpretation made possible by deep learning algorithms

  • Implementation of synchronization mechanisms for models/data sets used by AI

  • Building scalable and customizable predictive models on radiation protection data

  • Model training and optimization
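Weak-signal identification can be sketched as a rolling z-score over the sensor readings. This is a deliberately simple stand-in for the neural-network models described above; the window, threshold, and sensor series are illustrative assumptions:

```python
import statistics

def weak_signal_alerts(readings, window=5, z_thresh=3.0):
    """Flag readings that deviate strongly from the recent baseline --
    a simple rolling z-score stand-in for the neural-network models."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = statistics.mean(baseline), statistics.pstdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_thresh:
            alerts.append(i)
    return alerts

# Illustrative radiation-protection sensor series with one anomaly.
series = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 4.0, 1.0]
print(weak_signal_alerts(series))  # [6]
```

Flagged indices would feed the prioritized maintenance-action list that NucleoT presents to operators.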


Face detection, increased resolution, facial recognition and real-time identification

Facial recognition is a key issue in the field of security.

A security company wants its camera network to continuously film the crowd in a shopping mall in order to identify a wanted person.

Image recognition is less known in the health field, yet it addresses a major need, especially for disease detection and advanced diagnosis from radiological images or test results.

  • Simultaneous, real-time analysis of all faces in a space

  • Filtering by criteria and extraction of approximate matches

  • Increase of the resolution to improve the sharpness of the face

  • Comparison with a database and identification

  • Multiplication of the cameras to install, as the crowd moves in all directions

  • Recognition of an individual partially hidden by an accessory

  • Combination of algorithms: RetinaFace (face localization in the environment), Guided Super Resolution without facial landmarks (applying super resolution to small and blurred images), and Pose-Robust Face Recognition via Deep Residual Equivariant Mapping with shape reconstruction
  • Model training
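The final identification step, comparing a recognized face against a database, typically operates on embedding vectors produced upstream by the recognition network. A minimal sketch, where the gallery names, embedding size, and similarity threshold are all assumptions:

```python
import math

def cosine(a, b):
    """Cosine similarity between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify(probe, gallery, threshold=0.8):
    """Return the best gallery identity above the similarity threshold,
    else None. Embeddings and threshold here are illustrative."""
    best = max(gallery, key=lambda name: cosine(probe, gallery[name]))
    return best if cosine(probe, gallery[best]) >= threshold else None

# Toy 3-dimensional embeddings; real networks emit hundreds of dimensions.
gallery = {"suspect_A": [1.0, 0.0, 0.0], "suspect_B": [0.0, 1.0, 0.0]}
print(identify([0.9, 0.1, 0.0], gallery))  # suspect_A
print(identify([0.0, 0.0, 1.0], gallery))  # None
```

Returning `None` below the threshold is what prevents false identifications when no database entry is a close enough match.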


Crisis management: contingency planning

Contingency planning is implemented by public authorities, emergency services, and private companies. It consists of creating an anticipation mechanism to respond to any kind of crisis: a war, a pandemic, or a natural or technological disaster (factory fire, cyber-attack, faulty or dangerous product…). Contingency planning helps mobilize the resources needed to ensure the safety of people and property.

AI enriches the historical data of conventional planning with more advanced Machine Learning algorithms, taking into account both internal and external data for a detailed analysis of the variables.

The Covid pandemic was a major field of application for machine learning, particularly for predicting the spread of the virus, communicating and providing relevant information remotely, developing telemedicine and ensuring food supplies.

  • Detection of Covid and prediction of its spread using Machine Learning algorithms

  • Anticipation of the consequences of its propagation on the whole social, economic, health and institutional activity of the country

  • Collection of data from the health reports of 65 countries (in as many languages), animal disease locations, and airline data

  • Implementation of the R tool, which, beyond the language itself, provides an environment for data processing, modeling, and visualization

  • Manipulating data with an integrated range of statistical, clustering, classification, analysis, and graphical techniques

  • Finding and optimizing the best model, with implementation made easy and intuitive by choosing among the leading Machine Learning frameworks: scikit-learn, TensorFlow, or PyTorch
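A minimal sketch of spread prediction: fit log-linear growth to early case counts and extrapolate. The case numbers are illustrative, and this simple regression stands in for the richer models the frameworks above would provide:

```python
import math

# Illustrative daily case counts -- not real epidemiological data.
cases = [10, 14, 20, 28, 40, 56]

def fit_growth(y):
    """Least-squares fit of log(y) = a + b*t (log-linear growth)."""
    t = range(len(y))
    logs = [math.log(v) for v in y]
    n = len(y)
    tm, lm = sum(t) / n, sum(logs) / n
    b = sum((ti - tm) * (li - lm) for ti, li in zip(t, logs)) \
        / sum((ti - tm) ** 2 for ti in t)
    return lm - b * tm, b

def predict(a, b, t):
    """Extrapolate the fitted curve to day t."""
    return math.exp(a + b * t)

a, b = fit_growth(cases)
print(round(predict(a, b, 7)))  # forecast for day 7
```

The fitted rate `b` (here, roughly a 40% daily increase) is what contingency planners translate into resource-mobilization scenarios.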

Contact us