International Journal of Applied Information Systems
Foundation of Computer Science (FCS), NY, USA
Volume 12 - Issue 45
Published: July 2024
Authors: Abdulkader M. Al-Badani, Abdualmajed A. Al-Khulaidi
Abdulkader M. Al-Badani, Abdualmajed A. Al-Khulaidi. Developing an Efficient Mining of Frequent Itemsets using OFIM for Big Data. International Journal of Applied Information Systems. 12, 45 (July 2024), 16-22. DOI=10.5120/ijais2024451980
@article{10.5120/ijais2024451980,
  author    = {Abdulkader M. Al-Badani and Abdualmajed A. Al-Khulaidi},
  title     = {Developing an Efficient Mining of Frequent Itemsets using OFIM for Big Data},
  journal   = {International Journal of Applied Information Systems},
  year      = {2024},
  volume    = {12},
  number    = {45},
  pages     = {16-22},
  doi       = {10.5120/ijais2024451980},
  publisher = {Foundation of Computer Science (FCS), NY, USA}
}
%0 Journal Article
%D 2024
%A Abdulkader M. Al-Badani
%A Abdualmajed A. Al-Khulaidi
%T Developing an Efficient Mining of Frequent Itemsets using OFIM for Big Data
%J International Journal of Applied Information Systems
%V 12
%N 45
%P 16-22
%R 10.5120/ijais2024451980
%I Foundation of Computer Science (FCS), NY, USA
Mining big data is challenging: effective algorithms and software are needed to work with very large datasets. The FP-Growth algorithm is currently among the best methods for mining frequent itemsets. It builds a tree structure (the FP-tree) from the transaction dataset and then traverses it recursively, using a depth-first search strategy, to extract frequently occurring itemsets. However, FP-Growth takes a long time to compute and extract results and demands a lot of memory: constructing the FP-tree is costly, and the algorithm suffers as FP-trees grow larger and produce a high number of frequent itemsets. This paper proposes modifications to the operation of the FP-Growth algorithm. By using the proposed OFIM matrix to build a very compact FP-tree, the approach reduces mining time and the number of frequently generated itemsets, yielding a considerable reduction in decision-making time on large datasets. The technique also significantly improves speed on large datasets and optimizes memory use by limiting the number of frequent itemsets that are produced.
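For readers unfamiliar with the baseline algorithm the abstract describes, the sketch below shows the standard FP-Growth pipeline (build an FP-tree from a transaction dataset, then mine frequent itemsets from it) using the open-source mlxtend library. This is a minimal illustration of the baseline only, not the authors' OFIM modification; the toy transactions and the min_support threshold are illustrative assumptions.

```python
# Minimal sketch of baseline FP-Growth frequent-itemset mining with mlxtend.
# Illustrates the standard algorithm the paper builds on, NOT the proposed
# OFIM matrix optimization; transactions and min_support are assumptions.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# Toy transaction dataset: each inner list is one transaction's items.
transactions = [
    ["bread", "milk"],
    ["bread", "diapers", "beer", "eggs"],
    ["milk", "diapers", "beer", "cola"],
    ["bread", "milk", "diapers", "beer"],
    ["bread", "milk", "diapers", "cola"],
]

# One-hot encode the transactions into a boolean DataFrame.
te = TransactionEncoder()
onehot = te.fit(transactions).transform(transactions)
df = pd.DataFrame(onehot, columns=te.columns_)

# fpgrowth builds the FP-tree internally and mines all itemsets
# whose support is at least 60% of transactions.
frequent = fpgrowth(df, min_support=0.6, use_colnames=True)
print(frequent.sort_values("support", ascending=False))
```

Running this prints each frequent itemset with its support, e.g. {bread}, {milk}, {diapers}, and combinations such as {milk, diapers} that meet the threshold.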