Thesis: Yagnik, T. (2022). "", PhD Thesis, UK
Ibrahim, A.B., Yagnik, T., Mohammed, K. (2022). "Robustness of k-Anonymization Model in Compliance with General Data Protection Regulation" In: The 2022 5th International Conference on Computing and Big Data (ICCBD 2022), Shanghai, China, December 2022.
With the advancement of technology and the emergence of big data and the Internet of Things (IoT), individuals (data subjects) are increasingly exposed to privacy breaches of various types, which have caused considerable damage to both data subjects and brands. These and other concerns about data privacy breaches led the European Union to introduce much more stringent regulation to serve as a deterrent to businesses and organizations that handle personal data. This gave birth to the General Data Protection Regulation (GDPR) in 2018, which replaced the previous 1995 Data Protection Directive in Europe. This research examined the robustness of k-anonymity in compliance with the GDPR at varying k-values (5, 10, 50, and 100) using the 1994 USA Census Bureau data, commonly referred to as the adult dataset. Several measures were used to determine which k-value meets the GDPR criteria, and the findings revealed the anonymization threshold that best complies with those criteria in terms of information loss (which determines data utility), prosecutor re-identification risk percentage, and the attacker models (prosecutor, journalist and marketer models).
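As a rough illustration of the kind of check this involves (not the exact tooling or quasi-identifier choice used in the paper, which are assumptions here), a dataset is k-anonymous with respect to a set of quasi-identifiers when every combination of their values occurs at least k times, and the prosecutor re-identification risk for a record is 1 divided by the size of its equivalence class. A minimal sketch in Python:

```python
# Minimal sketch: check k-anonymity and prosecutor risk for a chosen k.
# The quasi-identifier columns and toy data below are illustrative assumptions.
import pandas as pd

def k_anonymity_report(df, quasi_identifiers, k):
    # Size of each equivalence class (records sharing all quasi-identifier values)
    class_sizes = df.groupby(quasi_identifiers).size()
    min_class = int(class_sizes.min())
    # Prosecutor re-identification risk per record is 1 / equivalence-class size
    per_record_risk = 1.0 / df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
    return {
        "k_achieved": min_class,                     # smallest equivalence class
        "satisfies_k": min_class >= k,               # does the data meet the target k?
        "max_prosecutor_risk": float(per_record_risk.max()),
        "avg_prosecutor_risk": float(per_record_risk.mean()),
    }

# Toy example with generalised quasi-identifiers (age band, sex, region)
data = pd.DataFrame({
    "age_band": ["30-39", "30-39", "30-39", "40-49", "40-49", "40-49"],
    "sex":      ["M",     "M",     "M",     "F",     "F",     "F"],
    "region":   ["North", "North", "North", "South", "South", "South"],
})
print(k_anonymity_report(data, ["age_band", "sex", "region"], k=3))
```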
YAGNIK, T., CHEN, F., and KASRAIAN, L. (2021). In: 2021 5th International Conference on Cloud and Big Data Computing (ICCBDC 2021), Liverpool, United Kingdom, August 2021. New York: ACM.
Quality of Service (QoS) has been identified as an important attribute of system performance in Data Stream Management Systems (DSMS). A DSMS should be able to allocate physical computing resources fairly between the different submitted queries while fulfilling their QoS specifications. System scheduling strategies need to be adjusted dynamically so that the available physical resources are utilised to guarantee end-to-end quality of service levels. In this paper, we present a proactive method that utilises a multi-level component profiling approach to build prediction models that anticipate several types of QoS violation and performance degradation. The models are constructed using several incremental machine learning algorithms, enhanced with ensemble learning and anomaly detection techniques. The approach performs accurate predictions in near real time, with accuracy of up to 85%, rising to 100% when the anomaly detection techniques are applied. This is a major component of a proposed QoS-Aware Self-Adapting Data Stream Management Framework.
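The paper's prediction models are built from incremental learners combined with ensembles and anomaly detection; the exact algorithms and profiled features are not reproduced here. The following is only a generic sketch of the incremental-learning idea, using scikit-learn's SGDClassifier with partial_fit over a stream of assumed, synthetic performance metrics:

```python
# Minimal sketch of incremental (online) learning for QoS-violation prediction.
# The metric names, labelling rule and thresholds are assumptions for illustration.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
clf = SGDClassifier(loss="log_loss")          # linear classifier trainable in increments
classes = np.array([0, 1])                    # 0 = QoS met, 1 = QoS violation

def next_metrics_batch(size=32):
    # Stand-in for a stream of profiled metrics: [input rate, queue length, CPU %]
    X = rng.random((size, 3)) * [1000.0, 500.0, 100.0]
    # Assumed labelling rule: heavy load tends to produce violations
    y = ((X[:, 0] > 700) & (X[:, 2] > 80)).astype(int)
    return X, y

# Train incrementally as batches arrive, predicting before each update
for step in range(50):
    X, y = next_metrics_batch()
    if step > 0:
        preds = clf.predict(X)                # anticipate violations for the new batch
    clf.partial_fit(X, y, classes=classes)    # then update the model online
```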
YAGNIK, T., CHEN, F., and KASRAIAN, L. (2021). In: The Seventh International Conference on Big Data, Small Data, Linked Data and Open Data (ALLDATA 2021), Porto, Portugal, April 2021. IARIA XPS Press.
The last decade witnessed a rapid growth in Big Data processing and applications, including the use of machine learning algorithms and techniques. For certain critical applications, such data need to be analysed under specific Quality of Service (QoS) constraints. Many frameworks have been proposed for QoS management and resource allocation in the various Distributed Stream Management Systems (DSMS), but they lack the capability to adapt dynamically to fluctuations in input data rates. This paper presents a novel QoS-aware, self-adaptive resource utilisation framework that combines instantaneous reactions with proactive actions. This research focuses on the load monitoring and analysis parts of the framework. By applying real-time analytics to performance and QoS metrics, the predictive models can assist in adjusting resource allocation strategies. Experiments were conducted to collect the various metrics, reduce their dimensionality, and identify the metrics most influential on QoS and the resource allocation schemes.
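The specific metrics and analysis pipeline belong to the paper; the sketch below only illustrates, under assumed metric names and synthetic data, one common way such collected measurements might be reduced in dimensionality and ranked for influence, here with standard scaling and PCA:

```python
# Minimal sketch: reduce collected QoS/performance metrics and rank their influence.
# The metric names and synthetic data are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
metrics = pd.DataFrame(
    rng.random((200, 5)),
    columns=["input_rate", "tuple_latency", "queue_length", "cpu_util", "mem_util"],
)

X = StandardScaler().fit_transform(metrics)   # put metrics on a common scale
pca = PCA(n_components=2).fit(X)              # keep the two strongest components

print("explained variance ratio:", pca.explained_variance_ratio_)
# Rank metrics by the magnitude of their loading on the first component
loadings = pd.Series(np.abs(pca.components_[0]), index=metrics.columns)
print(loadings.sort_values(ascending=False))
```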
Z. Yang, X. Qin, Y. Yang and T. Yagnik, "," 2013 International Conference on Computer Sciences and Applications, Wuhan, China, 2013, pp. 674-680, doi: 10.1109/CSA.2013.163.
Trust is a very important issue in cloud computing, and a cloud user needs a trust mechanism for selecting a reliable cloud service provider. Many trust technologies, such as SLAs, cloud audits, self-assessment questionnaires, and accreditation, have been proposed by research organizations such as the CSA. However, all of these provide only an initial trust and have many limitations. A hybrid trust service architecture for cloud computing is proposed in this paper, which primarily comprises two trust modules: an initial trust module and a trust-aided evaluation module. After an initial, basic trust is established in the initial trust module, the trust-aided evaluation module is used to further verify that the service provider is dependable. The approaches of D-S evidence theory and the Dirichlet distribution PDF are introduced to compute the trust degree value. The hybrid service architecture is more effective in selecting a reliable service provider and greatly improves computing efficiency.
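The paper's trust degree is computed from D-S evidence theory together with a Dirichlet-distribution model; that exact formulation is not reproduced here. As a generic sketch only, the following applies Dempster's rule of combination over a simple, assumed {trustworthy, untrustworthy} frame with made-up mass values from two evidence sources:

```python
# Minimal sketch of Dempster's rule of combination for two evidence sources.
# The frame of discernment and the mass values are illustrative assumptions.
from itertools import product

def combine(m1, m2):
    # Masses are dicts mapping frozenset hypotheses to belief mass
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb               # mass assigned to contradictory evidence
    # Normalise by (1 - K), where K is the total conflict
    return {h: v / (1.0 - conflict) for h, v in combined.items()}

T, U = frozenset({"trustworthy"}), frozenset({"untrustworthy"})
BOTH = T | U                                   # the "uncertain" hypothesis
# Two independent observations of the same cloud provider (values are made up)
m1 = {T: 0.7, U: 0.1, BOTH: 0.2}
m2 = {T: 0.6, U: 0.2, BOTH: 0.2}
print(combine(m1, m2))                         # combined masses; belief in T rises
```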