I propose a new tool to characterize the resolution of uncertainty around FOMC press conferences. It relies on the construction of a measure capturing the level of discussion complexity between the Fed Chair and reporters during the Q&A sessions. I show that complex discussions are associated with higher equity returns and a drop in realized volatility. The method creates an attention score by quantifying how much the Chair needs to rely on reading internal documents to be able to answer a question. This is accomplished by building a novel dataset of video images of the press conferences and leveraging recent deep learning algorithms from computer vision. This alternative data provides new information on nonverbal communication that cannot be extracted from the widely analyzed FOMC transcripts. This paper can be seen as a proof of concept that certain videos contain valuable information for the study of financial markets.
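As an illustration of how such an attention score could be computed from video frames, here is a minimal sketch; it assumes a binary frame classifier (reading internal documents vs. addressing the room) has already been trained, and the model interface, file handling and 0.5 threshold are placeholder choices rather than the paper's actual pipeline.

import torch
import torchvision.transforms as T
from PIL import Image

# Standard ImageNet-style preprocessing for a fine-tuned frame classifier.
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def attention_score(frame_paths, model, device="cpu"):
    """Fraction of frames in one answer classified as 'reading internal documents'."""
    model.eval()
    reading = 0
    with torch.no_grad():
        for path in frame_paths:
            x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
            prob_reading = torch.sigmoid(model(x)).item()  # single-logit binary head
            reading += prob_reading > 0.5
    return reading / max(len(frame_paths), 1)

A score close to 1 for a given answer would indicate heavy reliance on prepared material, which is the kind of nonverbal signal that transcripts alone cannot capture.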
We develop a methodology for detecting asset bubbles using a neural network. We rely on the theory of local martingales in continuous time and use a deep network to estimate the diffusion coefficient of the price process more accurately than the current estimator, obtaining an improved detection of bubbles. We show the outperformance of our algorithm over the existing statistical method in a laboratory created with simulated data. We then apply the network classification to real data and build a zero-net-exposure trading strategy that exploits the risky arbitrage emanating from the presence of bubbles in the US equity market from 2006 to 2008. The profitability of the strategy provides an estimate of the economic magnitude of bubbles as well as support for the theoretical assumptions we rely on.
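For readers unfamiliar with the underlying theory, a brief sketch in notation of my own choosing (the paper's exact formulation may differ): for a positive price process following dX_t = \sigma(X_t) dW_t under a risk-neutral measure, the process is a strict local martingale, i.e. exhibits a bubble, precisely when \int_a^\infty x / \sigma^2(x) \, dx < \infty for some a > 0. Bubble detection therefore reduces to estimating the diffusion coefficient \sigma(\cdot) accurately enough, especially in the tail, to decide whether this integral converges, which is where the neural estimator comes in.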
We develop a new method that detects jumps nonparametrically in financial time series and significantly outperforms the current benchmark on simulated data. We use a long short-term memory (LSTM) neural network that is trained on labelled data generated by a process that experiences both jumps and volatility bursts. As a result, the network learns how to disentangle the two. Then it is applied to out-of-sample simulated data and delivers results that considerably differ from the benchmark: we obtain fewer spurious detections and identify a larger number of true jumps. When applied to real data, our approach to jump screening makes it possible to extract a more precise signal about future volatility.
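A minimal sketch of what such a network could look like, assuming per-observation jump labels from the simulator are available; the architecture sizes and hyper-parameters below are illustrative placeholders, not the ones used in the paper.

import torch
import torch.nn as nn

class JumpDetector(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # one logit per time step: jump vs. no jump

    def forward(self, returns):            # returns: (batch, seq_len, 1)
        h, _ = self.lstm(returns)
        return self.head(h).squeeze(-1)    # (batch, seq_len) logits

model = JumpDetector()
loss_fn = nn.BCEWithLogitsLoss()           # per-step binary labels from the simulator
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Training loop (per minibatch of simulated returns and labels):
#   logits = model(returns); loss = loss_fn(logits, labels)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()

Because the training data mix genuine jumps with volatility bursts, the recurrent state gives the classifier the temporal context needed to tell the two apart, which purely local, threshold-based tests struggle to exploit.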
We’re currently using OmniLedger for logging in to our Matrix chat and to the c4dt.org website as users. This is explained in more detail here: CAS-login for OmniLedger, Account management in OmniLedger, C4DT partner login, and Matrix on Mobile. There were two elements missing: Automatic signup — in the current signup process, the C4DT admin team needs (…)
We are delighted to announce that three additional start-ups have joined the C4DT community through the C4DT start-up program. For two years, Duokey SA, Futurae Technologies AG and Nym Technologies SA will complement the already diverse group of partner companies with their start-up perspectives, collaborating and sharing insights on trust-building technologies. Their agility and innovation have allowed these start-ups to differentiate themselves in their respective fields.
We are looking forward to an exciting and fruitful collaboration.
Please click below for more information.
By 2025 the total amount of data created, captured and consumed is predicted to reach 175 zettabytes. However, much of that data’s value is wasted due to distrust: there is fear that the data could be exposed when used, computed on or shared with collaborators, possibly leading to leaked trade secrets or fines under data-protection legislation.
In this report we consider several real-life scenarios that may provoke causal research questions. As we introduce concepts in causal inference, we reference these case studies and other examples to clarify ideas and provide examples of how researchers are approaching topics using clear causal thinking.
The DEDIS team created a first version of the onChain secrets implementation using its skipchain blockchain. This implementation allows a client to store encrypted documents on a public but permissioned blockchain and to change the access rights to those documents after they have been written to the blockchain. The first implementation has been extensively tested by ByzGen and is ready to be used in a PoC demo.
This project aims to increase its performance and stability and to make it production-ready. Further, it will add a more realistic testing platform that makes it possible to check the validity of new functionality in a real-world setting and to catch regressions before they are pushed to the stable repository.
While moving the lab’s code toward more human-friendly programs, I usually write CLIs to ease configuration and deployment. When developing the client, I want to test it and see how complex it is to use. The best language to express that is a shell, as it is probably how the (…)
Direct cash assistance is gaining acceptance as a way to serve the 80 million forcibly displaced people around the globe. ICRC’s beneficiaries often do not have, or do not want, the ATM cards or mobile wallets normally used to spend or withdraw cash digitally, because issuers would subject them to privacy-invasive identity verification and potential screening against sanctions and counterterrorism watchlists. On top of that, existing solutions increase the risk of data leaks or surveillance induced by the many third parties with access to the data generated by the transactions. The proposed research focuses on the identity, account, and wallet management challenges in the design of a humanitarian cryptocurrency or token intended to address the above problems. This project is funded by Science and Technology for Humanitarian Action Challenges (HAC).
Theoretical computer science and computer security
Modulation recognition state-of-the-art architectures use deep learning models. These models are vulnerable to adversarial perturbations: imperceptible additive noise crafted to induce misclassification, posing serious questions in terms of safety, security, and performance guarantees at large. One of the best ways to make a model robust is adversarial learning, in which the model is fine-tuned with these adversarial perturbations. However, this method has several drawbacks: it is computationally costly, suffers from convergence instabilities, and does not protect against multiple types of corruption at the same time. The objective of this project is to develop improved and effective adversarial training solutions that tackle these drawbacks.
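For concreteness, a baseline projected-gradient-descent (PGD) adversarial-training step is sketched below. This is the standard approach the project seeks to improve upon, not the project's own method; epsilon, the step size and the iteration count are placeholder values.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=7):
    # Craft an adversarial perturbation within an L-infinity ball of radius eps.
    x_adv = x.clone().detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = torch.max(torch.min(x_adv.detach() + alpha * grad.sign(), x + eps), x - eps)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd_attack(model, x, y)          # perturb the batch on the fly
    loss = F.cross_entropy(model(x_adv), y)  # fine-tune on the perturbed batch
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The inner attack loop runs for every training batch, which is exactly why this baseline is computationally costly and why hardening a model against several corruption types at once is difficult.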
In this project, we are working with the ICRC to develop technical methods to combat social media-based attacks against humanitarian organizations. We are uncovering how the phenomenon of weaponizing information impacts humanitarian organizations and developing methods to detect and prevent such attacks, primarily via natural language processing and machine learning methods.
To design the circular economy of smart cities, a new way of thinking about urban infrastructure is necessary. Whether it is the installation of new waste-to-energy plants or the development of cutting-edge digital systems to safeguard electricity and water, cities need to be designed differently.
The IEEE TCCPS Technical Achievement Award recognizes significant and sustained contributions to the cyber-physical system (CPS) community through the IEEE Technical Committee on Cyber-Physical Systems (TCCPS). The award is based on the impact of the high-quality research conducted by the awardee throughout their career. It consists of a plaque and a citation. It was awarded to C4DT affiliated professor Giovanni De Micheli “For sustained contributions to smart sensors, wearable and implanted electronics, and cyber-medical systems.”
Many popular cloud applications collect enormous amounts of information on their users. […] Raising awareness of the issue, and providing ways to reduce it, is a worthy goal.
Encryption provides a solution to security risks, but its flipside is that it can hinder law enforcement investigations. A new technology called client-side scanning (CSS) would enable targeted information to be revealed through on-device analysis, without weakening encryption or providing decryption keys. However, an international group of experts, including EPFL researchers, has now released a report raising the alarm, arguing that CSS neither ensures effective crime prevention nor prevents unwarranted surveillance.
SafeAI aims to develop cyber-security solutions in the context of Artificial Intelligence (AI). With the advent of generative AI, it is possible to attack AI-enhanced applications with targeted cyberattacks, and also to generate cyberattacks that are automated and enhanced through the use of AI. The main goal of SafeAI is the development of software that enables the automated generation of adversarial attacks and defences using AI.
Point-of-Care Ultrasound (PoCUS) is a powerfully versatile and virtually consumable-free clinical tool for the diagnosis and management of a range of diseases. While the promise of this tool in resource-limited settings may seem obvious, its implementation is limited by inter-user bias, requiring specific training and standardisation. This makes PoCUS a good candidate for computer-aided interpretation support. Our study proposes the development of a PoCUS training program adapted to resource-limited settings and the particular needs of the ICRC.
The collection and analysis of risk data are essential to the insurance business model. The models for evaluating risk and predicting events that trigger insurance policies are based on knowledge derived from risk data.
The purpose of this project is to assess the scalability and flexibility of the software-based secure computing techniques in an insurance benchmarking scenario and to demonstrate the range of analytics capabilities they provide. These techniques offer provable technological guarantees that only authorized users can access the global models (fraud and loss models) based on the data of a network of collaborating organizations. The system relies on a fully distributed architecture without a centralized database, and implements advanced privacy-protection techniques based on multiparty homomorphic encryption, which makes it possible to efficiently compute machine-learning models on encrypted distributed data.
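As a much-simplified illustration of the general idea of computing on encrypted data, the toy example below uses single-key additively homomorphic encryption via the python-paillier (phe) library; the real system uses multiparty lattice-based homomorphic encryption across a federation of organizations, which this sketch does not capture, and the figures are invented.

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each insurer encrypts its local loss statistic before sharing it.
local_losses = [1_250_000, 830_000, 2_100_000]
encrypted = [public_key.encrypt(v) for v in local_losses]

# An untrusted aggregator can sum the ciphertexts without seeing any plaintext.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the holder of the private key can decrypt the aggregate.
print(private_key.decrypt(encrypted_total))  # 4180000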
P4 (Predictive, Preventive, Personalized and Participatory) medicine promises to revolutionize healthcare by providing better diagnoses and targeted preventive and therapeutic measures. In order to enable effective P4 medicine, DPPH defines an optimal balance between usability, scalability and data protection, and develops the required computing tools. The target result of the project is a platform composed of software packages that seamlessly enable clinical and genomic data sharing and exploitation across a federation of medical institutions throughout Switzerland. The platform is scalable, secure, responsible and privacy-conscious, and it can seamlessly integrate widespread cohort-exploration tools (e.g., i2b2 and TranSMART).
Recently, deep neural networks have been applied in many different domains due to their impressive performance. However, it has been shown that these models are highly vulnerable to adversarial examples: inputs that differ only slightly from the original but mislead the target model into generating wrong outputs. Various methods have been proposed to craft such examples for image data, but they are not readily applicable to Natural Language Processing (NLP). In this project, we aim to propose methods for generating adversarial examples for NLP models, such as neural machine translation models, in different languages. Moreover, we use adversarial attacks to analyze the vulnerability and interpretability of these models.
Customer understanding is a ubiquitous and multifaceted business application whose mission lies in providing better experiences to customers by recognising their needs. A multitude of tasks, ranging from churn prediction to accepting upselling recommendations, fall under this umbrella. Common approaches model each task separately and neglect the common structure some tasks may share. The purpose of this project is to leverage multi-task learning to better understand the behaviour of customers by modelling similar tasks in a single model. This multi-objective approach uses the information from all involved tasks to generate a common embedding that can benefit all of them and provide insights into the connections between different user behaviours, i.e. tasks. The project will provide data-driven insights into customer needs, supporting retention and revenue maximisation while delivering a better user experience.
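A minimal sketch of the shared-embedding, multi-head idea described above; the feature dimension, layer sizes and the two example tasks (churn and upsell acceptance) are placeholders rather than the project's actual task set.

import torch.nn as nn

class MultiTaskCustomerModel(nn.Module):
    def __init__(self, n_features, embed_dim=64):
        super().__init__()
        self.shared = nn.Sequential(          # common customer embedding
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, embed_dim), nn.ReLU(),
        )
        self.heads = nn.ModuleDict({          # one lightweight head per task
            "churn": nn.Linear(embed_dim, 1),
            "upsell": nn.Linear(embed_dim, 1),
        })

    def forward(self, x):
        z = self.shared(x)                    # embedding shared by all tasks
        return {name: head(z) for name, head in self.heads.items()}

Training would minimise a weighted sum of the per-task losses, so gradients from every task shape the common embedding, which is where the cross-task transfer comes from.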
The overall goal of this project is to develop methods for monitoring, modeling, and modifying dietary habits and nutrition based on large-scale digital traces. We will leverage data from both EPFL and Microsoft, to shed light on dietary habits from different angles and at different scales.
Our agenda broadly decomposes into three sets of research questions: (1) Monitoring and modeling, (2) Quantifying and correcting biases and (3) Modifying dietary habits.
Applications of our work will include new methods for conducting population nutrition monitoring, recommending better-personalized eating practices, optimizing food offerings, and minimizing food waste.