5th International Conference on Machine Learning, IOT and Blockchain (MLIOB 2024)

February 17 ~ 18, 2024, Dubai, UAE

Accepted Papers


Internet of Things and Blockchain: Directions, Issues, Potential

MhAntropinos Dhmiourgos, Department of Computer Engineering, University of Sciences, Tirgistan

ABSTRACT

This paper explores the intersection of Internet of Things (IoT) and Blockchain technologies, examining their potential synergies, challenges, and future directions. IoT, a network of interconnected devices capable of data collection and exchange, has revolutionized various industries. Blockchain, known for its decentralized and immutable ledger, promises enhanced security and transparency in data transactions. The paper analyzes the integration of these technologies, focusing on how blockchain fortifies IoT security, ensures data integrity, and fosters trust in interconnected systems. It discusses emerging design paradigms and innovative approaches to combining blockchain with IoT, addressing scalability, privacy, and interoperability challenges. Real-world examples showcasing the successful integration of blockchain in IoT applications are highlighted. Furthermore, the paper examines recent trends, developments, and foundational principles shaping the landscape of these technologies, offering insights into their evolving nature and potential implications for future advancements.
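To make the data-integrity argument concrete, the following minimal Python sketch (purely illustrative, not from the paper) chains IoT sensor readings with SHA-256 hashes so that any later tampering with a reading breaks verification:

import hashlib
import json
import time

def make_block(reading: dict, prev_hash: str) -> dict:
    """Bundle one IoT sensor reading with the hash of the previous block."""
    block = {
        "timestamp": time.time(),
        "reading": reading,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def verify_chain(chain: list[dict]) -> bool:
    """Recompute every hash and check that each block links to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != block["hash"]:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

# Example: append two temperature readings and verify integrity.
chain = [make_block({"sensor": "temp-01", "value": 21.4}, prev_hash="0" * 64)]
chain.append(make_block({"sensor": "temp-01", "value": 21.7}, prev_hash=chain[-1]["hash"]))
assert verify_chain(chain)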

KEYWORDS

AI, IoT, Blockchain.


Prescription Dispense Using Smart Contract in Saudi Arabia

Atheer Hussain Mashhour and Hamed Alqahtani, Department of Information System, King Khalid University, Abha, Saudi Arabia

ABSTRACT

Medical institutions distribute regulated medications to patients and still rely on manual documentation to record the production, distribution, prescription, administration, and disposal of controlled substances. Consequently, this reliance on handwritten paperwork leads to operational inefficiencies. Of particular concern is the potential for this practice to facilitate the circumvention or manipulation of the system, thereby enabling the issuance of undocumented or non-standardized prescriptions that could harm patients. The central thesis is that smart contracts are a solid foundation for any blockchain development project; this is demonstrated by describing the design and implementation of a prescription dispensing approach that manages the different participants in the related sectors. Moreover, designing secure smart contracts is required to preserve the privacy and security of the healthcare system. This study presents a proposal and implementation of an immutable, authenticated prescription for patients who suffer from chronic disease and need ongoing dispensing on a regular basis. By employing smart contracts on a blockchain, I attempt to illuminate the benefits of using this technology in the prescription system in Saudi Arabia specifically, and the ability of smart contracts to provide security for applications in general. The findings contribute in several ways to our understanding of smart contracts and provide a basis for building a secure prescription dispensing approach that serves the healthcare sector.
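The abstract does not include the contract itself; the following Python sketch is a hypothetical model of the dispensing logic it describes (registered roles, issuance, refill limits, an append-only history), which in practice would be written as an on-chain smart contract:

import time
from dataclasses import dataclass, field

DISPENSE_INTERVAL = 30 * 24 * 3600  # hypothetical 30-day refill window, in seconds

@dataclass
class Prescription:
    patient: str
    physician: str
    drug: str
    refills_left: int
    last_dispensed: float = 0.0
    history: list = field(default_factory=list)  # append-only log of dispensing events

class PrescriptionLedger:
    """Mimics the on-chain contract: only registered roles may issue or dispense."""
    def __init__(self, physicians: set[str], pharmacies: set[str]):
        self.physicians = physicians
        self.pharmacies = pharmacies
        self.prescriptions: dict[int, Prescription] = {}
        self._next_id = 0

    def issue(self, physician: str, patient: str, drug: str, refills: int) -> int:
        if physician not in self.physicians:
            raise PermissionError("only a registered physician may issue")
        self._next_id += 1
        self.prescriptions[self._next_id] = Prescription(patient, physician, drug, refills)
        return self._next_id

    def dispense(self, pharmacy: str, rx_id: int) -> None:
        if pharmacy not in self.pharmacies:
            raise PermissionError("only a registered pharmacy may dispense")
        rx = self.prescriptions[rx_id]
        now = time.time()
        if rx.refills_left <= 0:
            raise ValueError("no refills remaining")
        if now - rx.last_dispensed < DISPENSE_INTERVAL:
            raise ValueError("refill window has not elapsed")
        rx.refills_left -= 1
        rx.last_dispensed = now
        rx.history.append((pharmacy, now))

# Usage: issue once, then dispense; early refills are rejected by the interval check.
ledger = PrescriptionLedger(physicians={"dr_a"}, pharmacies={"pharmacy_x"})
rx_id = ledger.issue("dr_a", patient="patient_1", drug="drug_y", refills=3)
ledger.dispense("pharmacy_x", rx_id)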

KEYWORDS

Smart Contracts, Blockchain, Prescription.


RIVCoin: An Alternative, Integrated, CeFi/DeFi-Vaulted Cryptocurrency

Roberto Rivera, Guido Rocco, Massimiliano Marzo and Enrico Talin, RIV Capital, Switzerland

ABSTRACT

This whitepaper introduces RIVCoin, a cryptocurrency built on Cøsmos and fully stabilized by a diversified portfolio of both CeFi and DeFi assets, available in a digital, non-custodial wallet called RIV Wallet, which aims to give Users an easy way to access the cryptocurrency markets while complying with the strictest current AML laws and regulations. The token is a cryptocurrency stabilized at all times by a basket of assets: reserves are invested in a portfolio composed, over the long term, of 50% CeFi assets (Fixed Income, Equity, Mutual and Hedge Funds) and 50% diversified strategies focused on digital assets, mainly staking and LP farming on major, battle-tested DeFi protocols. Like the dollar before Bretton Woods, the cryptocurrency is always fully stabilized by vaulted proof of assets: it is born and managed as a decentralized token, minted by a Decentralized Autonomous Organization, and entirely stabilized by assets evaluated by professional, independent third parties. Users can trade, pool, and exchange the token without any intermediary, and can merge their tokens into a Liquidity Pool whose rewards are composed of both the trading fees and the liquidity rewards derived from the reserve's seigniorage, which should affect the token's price movement. In the long run, RIVCoin holders will also have access to an ecosystem of added-value services that will further increase the token's value. Our cryptocurrency is built to be Proof of Stake (PoS: energy saving), Proof of Asset (PoA: stabilized) and Proof of Liquidity (PoL: market provided). RIVCoin allows the User to enter the cryptocurrency market easily, without experiencing unjustified, large price depreciations, since the reserves are pledged as a last resort in favour of the Users. Moreover, using RIV Wallet allows the User to perform KYC/AML procedures that comply with the latest international FATF-GAFI VASP regulatory framework. The Liquidity Pool's fair incentive mechanism enforces a de facto democratic redistribution of wealth: Users who decide to pool RIVCoin in the Liquidity Pool receive additional RIVCoin, and new RIVCoin are minted when the reserves increase in value or when new RIVCoin are purchased. Wealthier Users thereby accept a redistribution of income to the benefit of those who have purchased fewer tokens. In (Cooperative) Game Theory, maximization of the economic benefit of the ecosystem is achieved when players' incentives are perfectly aligned. The proposed model aligns incentives: it decreases the risk exposure of wealthier Users while implicitly increasing that of smaller ones to a level they perceive as still sustainable, never creating ultra-speculative positions (in H. P. Minsky's definition, positions "when the incoming flows are not sufficient even to pay interest, so that it is necessary to apply for new loans both to repay the principal portion of the initial loan and to honor the payment of the related interest"). In other words, wealthier Users stabilize the risk associated with the market portfolios of the reserves invested in Centralized and Decentralized Finance, without falling into the "bet scheme".
Users indirectly benefit from access to the rewards of sophisticated cryptocurrency portfolios hitherto precluded to them, as well as from a real redistribution of wealth, without this turning into a disadvantage for the wealthy User, who benefits from the greater stability created by the large influx of smaller Users. The progressive growth therefore becomes additional value that tends to stabilize over time, optimizing RIVCoin at the systemic risk level.
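The abstract does not state explicit formulas; the sketch below only illustrates the proof-of-asset idea under the assumption that each token is backed pro rata by reserve value, so new tokens can be minted only when reserve growth keeps backing at or above a target level (all numbers are hypothetical):

def backing_per_token(reserve_value: float, supply: float) -> float:
    """Vaulted proof-of-asset: each token is backed by its pro-rata share of reserves."""
    return reserve_value / supply

def mintable_on_reserve_growth(reserve_value: float, supply: float, target_backing: float) -> float:
    """Tokens that can be minted without pushing backing below the target level."""
    return max(reserve_value / target_backing - supply, 0.0)

# Example: reserves appreciate from $10M to $11M while supply is 10M tokens
# and the protocol (hypothetically) targets $1.00 of backing per token.
supply = 10_000_000.0
print(backing_per_token(11_000_000.0, supply))                # 1.10
print(mintable_on_reserve_growth(11_000_000.0, supply, 1.0))  # 1,000,000 new tokens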

KEYWORDS

Blockchain, Cryptocurrency, Asset-Referenced Token.


An Autonomous System to Enhance Urban Cleanliness by Identifying and Collecting Trash Using AI and Machine Learning

Zhuowen Wang1, Jonathan Sahagun2, 1Santa Margarita Catholic High School, 22062 Antonio Pkwy, Rancho Santa Margarita, CA 92688, 2Computer Science Department, California State Polytechnic University, Pomona, CA

ABSTRACT

The HawkEyes system, featuring a sophisticated robotic car, is a key innovation in modern waste management. This autonomous vehicle is adeptly equipped with advanced AI and computer vision technology, enabling precise identification and categorization of different waste types. Optimized for city environments, the car navigates autonomously, relying on an AI detection system. Built to withstand a variety of urban conditions, the robotic car is robust and adaptable. Its suite of sensors and cameras are strategically placed, enhancing its ability to detect waste and maneuver effectively in urban areas. As it patrols city streets, the vehicle efficiently identifies locations with accumulated waste, supporting targeted and effective cleanup efforts. In its current iteration, HawkEyes stands as an intelligent and practical solution to urban waste challenges. It fuses technological innovation with real-world application, not only improving the efficiency of waste collection but also contributing to environmental conservation efforts. This robotic car demonstrates the transformative role of AI and robotics in sustainable waste management, showcasing a new frontier in city maintenance and ecological care.
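The abstract does not disclose the detection model; the following Python sketch shows one plausible way such an AI detection pass could look, using an off-the-shelf torchvision detector and a stand-in label set (the paper's actual model and waste categories are assumptions here):

import numpy as np
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Hypothetical label set; COCO ids for "bottle" and "cup" serve only as stand-ins
# for the paper's real waste categories.
WASTE_LABELS = {44: "bottle", 47: "cup"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_waste(frame) -> list:
    """Run one detection pass and keep only waste-like classes above a confidence threshold."""
    with torch.no_grad():
        output = model([to_tensor(frame)])[0]
    hits = []
    for label, score, box in zip(output["labels"], output["scores"], output["boxes"]):
        if score > 0.6 and int(label) in WASTE_LABELS:
            hits.append((WASTE_LABELS[int(label)], float(score), box.tolist()))
    return hits

# Usage with a stand-in camera frame; a real robot would feed live camera images.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
print(detect_waste(frame))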

KEYWORDS

Image Recognition, Machine Learning Algorithms, Autonomous, Environmental Sustainability.


An Environmental Monitoring System Empowering Users to Enhance Air Quality Using Smart Sensing

Siying Wang1, Yujia Zhang2, 1Choate Rosemary Hall, 333 Christian St, Wallingford, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768

ABSTRACT

This paper addresses the critical need for real-time air quality monitoring through the development and implementation of the Air Pendent app and device [4]. Recognizing the escalating concerns surrounding carbon dioxide emissions, climate change, and indoor air quality, our solution integrates cutting-edge technology to empower users with immediate, personalized insights into their surroundings [5]. The challenges of interoperability, sensor accuracy, and community engagement were systematically addressed through experiments involving ten diverse participants. Results revealed high user satisfaction, consistent sensor accuracy, and varying community participation rates. While optimization for Android devices and cross-platform performance enhancements are recommended, the Air Pendent project emerges as a promising tool for fostering environmental awareness and community-driven solutions [6]. This comprehensive and user-centric approach provides a tangible means for individuals to actively engage with and positively impact their immediate environment, positioning the solution as an essential tool for a sustainable future [7].
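The abstract does not describe the device's internals; the sketch below is a hypothetical Python illustration of how raw CO2 readings could be aggregated into the qualitative feedback a user might see in such an app (the thresholds are illustrative assumptions, not the Air Pendent's actual calibration):

# Commonly cited indoor CO2 bands; thresholds here are illustrative assumptions.
CO2_BANDS = [
    (800, "good"),
    (1200, "moderate"),
    (2000, "poor"),
]

def classify_co2(ppm: float) -> str:
    """Map a raw CO2 reading (ppm) to a qualitative air-quality band."""
    for limit, label in CO2_BANDS:
        if ppm <= limit:
            return label
    return "very poor"

def summarize(readings: list) -> dict:
    """Aggregate a window of readings into the summary a user might see in the app."""
    avg = sum(readings) / len(readings)
    return {"average_ppm": round(avg, 1), "band": classify_co2(avg), "samples": len(readings)}

print(summarize([640.0, 710.0, 905.0]))  # {'average_ppm': 751.7, 'band': 'good', 'samples': 3}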

KEYWORDS

Air Quality, CO2 Detector, Precise Measurement, Electrical Signals, Sensor.


Impact of Decentralized Autonomous Organizations (DAO) on Society 5.0

Rabih Amhaz1,2, Cedric Bobenrieth1,2 and Marlene Marz2,3, 1ICube – Engineering Science, Computer Science and Imaging Laboratory, UMR 7357, Strasbourg University, 67000 Strasbourg, France, 2Icam, site de Strasbourg-Europe, 67300 Schiltigheim, France, 3University of Mittweida, Technikumpl. 17, 09648 Mittweida, Germany

ABSTRACT

Decentralized autonomous organizations (DAOs) are not a novel social phenomenon; rather, they draw inspiration from self-organizing systems and are often regarded as digital counterparts of cooperatives (Co-ops), wherein members fully own and govern the organization. The advancement of digital solutions for decentralization, such as Distributed Ledger Technology (DLT), along with the emergence of the third generation of websites (Web3) and platforms, has propelled DAOs to a new echelon. As such, DAOs represent the next generation of organizations, aptly referred to as Organization 5.0 in the context of Society 5.0. The objective of this paper is to provide a comprehensive overview of the evolutionary trajectory of decentralized autonomous organizations and their classification. The advent of Ethereum in 2015 enabled the realization of DAOs, with "The DAO" being the first large-scale example, established in 2016 as a decentralized venture fund within the Ethereum ecosystem. Over time, DAOs have expanded their scope beyond fundraising and have evolved to serve various purposes. To provide a comprehensive context, the paper presents background information on the evolution of blockchain applications and discusses ethical considerations related to DAOs. In order to identify the most common categories of DAOs, this paper consults various DAO explorers and includes, for each identified category, a descriptive example of a DAO. Finally, the paper concludes by offering an outlook on the future of DAOs.

KEYWORDS

DAO, Blockchain, Web3, DLT, Organisations, Society 5.0, Social Impact.


Item Enhanced Diversification in the Recommendation System Using Graph Neural Network

Naina Yadav1,3, Ramakant Kumar2,3 and Anil Kumar Singh3, 1SCSET, Bennett University, Greater Noida, Uttar Pradesh, 2GLA University, Mathura, Uttar Pradesh, 3Indian Institute of Technology (BHU), Varanasi

ABSTRACT

A recommendation system is a set of programs that utilize different methodologies to select relevant items for the user. Graph neural networks have been extensively used in recent years to improve the quality of recommendations across all domains. A general recommendation system's main goal is to recommend items to the user accurately, and it frequently prioritizes items that are well-liked or mainstream. If the model concentrates only on one specific item category from the user's past preferences, then recommendations for the target user become too obvious. Diversity in recommendations is introduced to address this problem. We propose IG-DivRS (item-enhanced graph neural network for a diversified recommendation system), a model that uses a Graph Neural Network (GNN) with the user's interacted and non-interacted item history to generate diversified recommendations. The novelty of our proposed model is to explore the effect of non-interacted items on the target user for diversified recommendation generation. Instead of selecting random non-interacted items for the target user, we apply the DPP (Determinantal Point Process) algorithm to select non-interacted items appropriately. The detailed experimental analysis shows that our model IG-DivRS outperforms the state-of-the-art models in both accuracy and diversity.
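The exact kernel and selection procedure are not given in the abstract; the following Python sketch shows a generic greedy DPP MAP selection over item embeddings, the kind of step described for choosing non-interacted items (the kernel construction here is an assumption, not the authors' formulation):

import numpy as np

def greedy_dpp(embeddings: np.ndarray, quality: np.ndarray, k: int) -> list:
    """Greedily approximate the DPP MAP set: pick k items that balance quality and diversity.

    The kernel L = diag(q) @ S @ diag(q) combines per-item quality scores q with a
    cosine-similarity matrix S, so determinants grow when selected items are both
    relevant and mutually dissimilar.
    """
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    L = np.outer(quality, quality) * (unit @ unit.T)
    selected = []
    for _ in range(k):
        # Pick the candidate whose addition yields the largest determinant.
        best, best_det = None, -np.inf
        for i in range(len(L)):
            if i in selected:
                continue
            idx = selected + [i]
            det = np.linalg.det(L[np.ix_(idx, idx)])
            if det > best_det:
                best, best_det = i, det
        selected.append(best)
    return selected

# Example: 6 random item embeddings with quality scores; pick 3 diverse candidates.
rng = np.random.default_rng(0)
items = rng.normal(size=(6, 16))
scores = rng.uniform(0.5, 1.0, size=6)
print(greedy_dpp(items, scores, k=3))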

KEYWORDS

Diverse Recommendation, Graph Neural Network, Determinantal Point Process, Accuracy-Diversity Trade-off.


Legal Documents Drafting With Fine-tuned Pre-trained Large Language Model

Chun-Hsien Lin and Pu-Jen Cheng, Department of Computer Science & Information Engineering, National Taiwan University, Taipei, Taiwan

ABSTRACT

With the development of Large Language Models (LLMs), fine-tuning a pre-trained LLM has become a mainstream paradigm for solving downstream natural language processing tasks. However, training a language model in the legal field requires a large number of legal documents so that the model can learn legal terminology and the particular format of legal documents. Typical NLP approaches usually rely on many manually annotated data sets for training, but in legal applications it is difficult to obtain such data sets at scale, which restricts the application of typical methods to the legal document drafting task. The experimental results of this paper show that not only can we leverage a large number of annotation-free legal documents, without Chinese word segmentation, to fine-tune a large-scale language model, but, more importantly, a pre-trained LLM can be fine-tuned on a local computer to generate legal document drafts, while at the same time protecting information privacy and improving information security.
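As a rough illustration of the described setup (not the authors' code), the following Python sketch fine-tunes a placeholder causal LM on raw, unannotated documents with Hugging Face Transformers; the model name, corpus loading, and prompt are assumptions:

import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

# Placeholder base model; the paper fine-tunes a pre-trained LLM locally on
# unannotated legal documents, framed here as plain causal-LM training.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

class LegalCorpus(torch.utils.data.Dataset):
    """Wraps raw legal documents; no manual annotation or word segmentation is needed."""
    def __init__(self, documents, max_length=512):
        self.examples = [
            tokenizer(doc, truncation=True, max_length=max_length) for doc in documents
        ]
    def __len__(self):
        return len(self.examples)
    def __getitem__(self, idx):
        return self.examples[idx]

documents = ["(load raw legal documents from local storage here)"]  # placeholder corpus
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="legal-draft-model", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=LegalCorpus(documents),
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()

# Draft generation from a (hypothetical) prompt after fine-tuning.
prompt = tokenizer("The parties hereby agree that", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**prompt, max_new_tokens=50)[0]))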

KEYWORDS

LLM, Legal Document Drafting, Fine-tuning Large Language Models, Text Generation.


The Application of Artificial Intelligence (AI) in Cybersecurity

Benjamin Ugwu, Department of Information Technology and Computer Engineering, Atlantis University, Miami, Florida, USA

ABSTRACT

As cyber threats continue to grow in sophistication and complexity, traditional cybersecurity measures struggle to keep pace. One major challenge is the early detection and mitigation of threats in large-scale enterprise networks. The years 2022 and 2023 have seen an explosion of AI tools across different industries, bringing cutting-edge innovation and capabilities to them. This research paper aims to determine whether AI is a feasible solution for the early detection and mitigation of cybersecurity threats in large-scale enterprise networks. Over forty-one (41) sources, including government reports, textbooks and peer-reviewed journals, of which twenty-six (26) were selected, were used to gather and analyse information on the subject matter of this paper. The knowledge gathered in the course of this research shows that Artificial Intelligence (AI) is a feasible option for solving the crucial cybersecurity issue of early detection and mitigation of threats facing large-scale enterprise networks. The author believes that the implementation of AI solutions in large-scale enterprises will not only help to solve the security issues facing these enterprises but will also encourage more large-scale enterprises to adopt AI-based cybersecurity solutions and partake in their numerous security benefits.

KEYWORDS

Cybersecurity, Information Security, Artificial Intelligence (AI), Machine Learning (ML).


Biglip: A Pipeline for Building Data Sets for Lip-reading Models

Umar Jamil, University of Leeds, UK

ABSTRACT

Lip-reading, the process of deciphering text from visual mouth movements, has garnered significant research attention. While numerous data sets exist for training lip-reading models, their coverage of diverse languages remains limited. In this paper, we introduce an innovative pipeline for constructing data sets tailored to lip-reading models, leveraging web-based videos. Notably, this pipeline is the first of its kind to be made publicly available. By employing this pipeline, we successfully compiled a data set comprising Italian videos, a previously unexplored language for lip-reading research. Subsequently, we utilized this data set to train two lip-reading models, thereby highlighting the strengths and weaknesses of employing wild-sourced videos (e.g., from YouTube) for lip-reading model training. The proposed pipeline encompasses modules for audio-video synchronization, audio transcription, alignment, and cleaning, and facilitates the creation of extensive training data with minimal supervision. By presenting this pipeline, we aim to encourage further advancements in lip-reading research, specifically in the domain of multilingual data sets, thus fostering more comprehensive and inclusive lip-reading models.
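The pipeline's modules are only named in the abstract; the Python sketch below is a hypothetical skeleton of how those stages (synchronization, transcription, alignment, cleaning) might compose, with stub stage functions standing in for the real tools:

from dataclasses import dataclass

@dataclass
class Clip:
    video_path: str
    transcript: str = ""
    start: float = 0.0
    end: float = 0.0

# Stage functions are hypothetical stand-ins for the modules described in the abstract;
# real implementations would call sync, ASR, and forced-alignment tools.
def synchronize(clip: Clip) -> Clip:
    """Correct any audio-video offset so mouth movements match the audio track."""
    return clip

def transcribe(clip: Clip) -> Clip:
    """Run automatic speech recognition to obtain a raw transcript."""
    clip.transcript = "(ASR output placeholder)"
    return clip

def align(clip: Clip) -> list:
    """Split the video into sentence-level segments with timestamps."""
    return [Clip(clip.video_path, clip.transcript, start=0.0, end=2.5)]

def clean(segments: list) -> list:
    """Drop segments with empty transcripts or implausible durations."""
    return [s for s in segments if s.transcript and 0.2 < (s.end - s.start) < 20.0]

def build_dataset(video_paths: list) -> list:
    dataset = []
    for path in video_paths:
        segments = align(transcribe(synchronize(Clip(path))))
        dataset.extend(clean(segments))
    return dataset

print(build_dataset(["example_italian_video.mp4"]))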


Enhancing Mathematical Explanation Generation: a Comparative Study of Fine-tuned Large Language Models

Youmna Moussa, School of Computing, University of Leeds, Leeds, United Kingdom

ABSTRACT

AI-enhanced adaptive learning systems that provide personalized and adaptable learning environments foster greater engagement and academic achievement. This correlation underscores the importance of tailored educational approaches in enhancing student involvement and success. Consequently, numerous Intelligent Tutoring Systems (ITS) have been effectively implemented to improve teaching methods and enhance students' learning experiences across various domains and applications. These systems have proven especially useful in teaching technical subjects, such as mathematics, by assisting students in acquiring mathematical knowledge and skills. Despite the significant advancements in mathematical language processing driven by the advent of Large Language Models (LLMs), fine-tuning these expansive models for specialized mathematical tasks presents notable challenges. This is often a consequence of the scarcity of labeled datasets in this domain. To bridge this gap, this study curates a unique dataset, comprising around 4,000 formulas from mathematics, physics, and engineering, which features computationally rearranged equations paired with expert-reviewed, student-friendly explanations. Developed using the SymPy Python library, it serves as a comprehensive resource for evaluating language models in mathematical education. Furthermore, utilizing both the mathematical LaTeX and the string format of equations, this research fine-tunes several LLMs to highlight the differences between the models' explanation generation capabilities and their performance in generating accurate and informative mathematical explanations. Additionally, the comparison between outputs generated using LaTeX and string representations offers insights into the effectiveness of each format in conveying mathematical semantics. This research contributes to the field of automated mathematics education, providing a deeper understanding of the potential of language models in creating explanations for intelligent tutoring systems. Our key achievements include the tbs17/MathBERT-custom model's Rouge-1 score, which marginally increased from 0.9678 in LaTeX format to 0.9679 in string format. More notably, the AnReu_math_pretrained_bert model's Rouge-1 score significantly improved from 0.9116 when processing LaTeX to 0.9479 with string representations. This underscores the superior handling of string formats by these models, leading to improved explanation generation in the field of mathematics.
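As a small illustration of the dataset-construction step described above (not the authors' code), the following Python sketch rearranges one formula with SymPy and emits both the LaTeX and string forms that the fine-tuned models are compared on; the paired explanation here is a hand-written placeholder, whereas the paper's explanations are expert-reviewed:

import sympy as sp

# Define the kinematics formula v = u + a*t and rearrange it for a.
v, u, a, t = sp.symbols("v u a t")
equation = sp.Eq(v, u + a * t)
a_expr = sp.solve(equation, a)[0]
rearranged = sp.Eq(a, a_expr)

latex_form = sp.latex(rearranged)   # LaTeX representation of the rearranged equation
string_form = str(rearranged)       # plain string representation of the same equation
explanation = ("Subtract the initial velocity u from the final velocity v, "
               "then divide by the elapsed time t to isolate the acceleration a.")

print(latex_form)
print(string_form)
print(explanation)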

KEYWORDS

Mathematics Education, Mathematical Language Processing, Large Language Model Fine-Tuning.