📜  Frequent Itemsets and Their Applications in Data Analysis (1)

📅  Last modified: 2023-12-03 14:58:46.967000             🧑  Author: Mango


Frequent itemset mining is a data-mining technique for finding sets of items that occur together frequently in large datasets. It is used mainly in association-rule mining, product recommendation, and data compression.
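To make the definition concrete: the support of an itemset is the fraction of transactions that contain it. A minimal sketch, using a toy basket list invented for this illustration:

```python
# Toy transactions, written as Python sets for easy subset tests
transactions = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]

def support(itemset, transactions):
    """Fraction of transactions that contain every item in itemset."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

print(support({2, 5}, transactions))  # {2, 5} appears in 3 of 4 baskets -> 0.75
print(support({4}, transactions))     # {4} appears in 1 of 4 baskets -> 0.25
```

An itemset is called frequent when its support meets a chosen minimum threshold, e.g. 0.5.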

Apriori Algorithm

The Apriori algorithm is one of the most basic algorithms for frequent itemset mining. Its core idea is to use prior knowledge (the downward-closure property) to shrink the search space: if an itemset is frequent, then all of its subsets must also be frequent; equivalently, if an itemset is infrequent, no superset of it can be frequent. The algorithm iterates over three steps: scan the dataset, generate candidate itemsets and count their support, and keep the candidates that meet the minimum support threshold as frequent itemsets.
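The downward-closure property can be sketched directly on a toy dataset (all names here are illustrative only): once {4} turns out to be infrequent, no pair containing 4 ever needs to be counted.

```python
from itertools import combinations

transactions = [{1, 3, 4}, {2, 3, 5}, {1, 2, 3, 5}, {2, 5}]
min_support = 0.5
n = len(transactions)

# Count singletons; {4} appears in only 1 of 4 transactions (support 0.25).
items = sorted({i for t in transactions for i in t})
frequent_1 = [i for i in items
              if sum(1 for t in transactions if i in t) / n >= min_support]

# Downward closure: 4 is infrequent, so no pair containing 4 is a candidate.
candidates = [set(c) for c in combinations(frequent_1, 2)]
assert all(4 not in c for c in candidates)
```

This pruning is what keeps Apriori tractable: candidate k-itemsets are built only from frequent (k-1)-itemsets.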

The following is a simple Python implementation of the Apriori algorithm:

def load_data_set():
    """Return a small sample dataset of four transactions."""
    return [[1, 3, 4], [2, 3, 5], [1, 2, 3, 5], [2, 5]]

def create_c1(data_set):
    """Build the candidate 1-itemsets (C1) as frozensets,
    so they can be used as dictionary keys when counting support."""
    c1 = []
    for transaction in data_set:
        for item in transaction:
            if [item] not in c1:
                c1.append([item])
    c1.sort()
    return list(map(frozenset, c1))

def scan_d(data_set, candidates, min_support):
    """Count each candidate's support and keep those meeting min_support."""
    sscnt = {}
    for tid in data_set:
        for can in candidates:
            if can.issubset(tid):
                sscnt[can] = sscnt.get(can, 0) + 1
    num_items = float(len(data_set))
    ret_list = []
    support_data = {}
    for key in sscnt:
        support = sscnt[key] / num_items
        if support >= min_support:
            ret_list.insert(0, key)
        support_data[key] = support
    return ret_list, support_data

def apriori_gen(freq_sets, k):
    """Generate candidate k-itemsets by joining frequent (k-1)-itemsets
    that agree on their first k-2 items."""
    ret_list = []
    len_freq_sets = len(freq_sets)
    for i in range(len_freq_sets):
        for j in range(i + 1, len_freq_sets):
            # sorting is required: frozensets have no guaranteed iteration
            # order, so comparing unsorted prefixes could miss valid joins
            l1 = sorted(freq_sets[i])[:k - 2]
            l2 = sorted(freq_sets[j])[:k - 2]
            if l1 == l2:
                ret_list.append(freq_sets[i] | freq_sets[j])
    return ret_list

def apriori(data_set, min_support=0.5):
    """Return all frequent itemsets (grouped by size) and their supports."""
    C1 = create_c1(data_set)
    D = list(map(set, data_set))
    L1, support_data = scan_d(D, C1, min_support)
    L = [L1]
    k = 2
    while len(L[k - 2]) > 0:
        Ck = apriori_gen(L[k - 2], k)
        Lk, sup_k = scan_d(D, Ck, min_support)
        support_data.update(sup_k)
        L.append(Lk)
        k += 1
    return L, support_data
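As a sanity check, the itemsets that apriori() should report on the sample data from load_data_set() at min_support=0.5 can also be found by brute-force enumeration. This sketch is independent of the listing above; it copies the same dataset literal:

```python
from itertools import combinations

data = [[1, 3, 4], [2, 3, 5], [1, 2, 3, 5], [2, 5]]
sets = [set(t) for t in data]
items = sorted({i for t in sets for i in t})

# Enumerate every non-empty itemset and keep those with support >= 0.5
frequent = []
for k in range(1, len(items) + 1):
    for combo in combinations(items, k):
        s = set(combo)
        support = sum(1 for t in sets if s <= t) / len(sets)
        if support >= 0.5:
            frequent.append(frozenset(s))

print(sorted(frequent, key=lambda s: (len(s), sorted(s))))
# These are the itemsets apriori(load_data_set()) should also report,
# including the maximal frequent triple {2, 3, 5}
```

Brute force examines all 2^n - 1 itemsets, which is exactly the exponential blow-up Apriori's pruning avoids on larger alphabets.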

FP-Growth Algorithm

The FP-Growth algorithm is an improvement over Apriori. It stores the dataset in a compact data structure called an FP-tree and finds frequent itemsets recursively from conditional subtrees. Because FP-Growth does not have to generate and count candidate subsets the way Apriori does, it is more efficient.

The following is a simple Python implementation of the FP-Growth algorithm:

class TreeNode:
    """A node in the FP-tree."""
    def __init__(self, name, count, parent):
        self.name = name            # item this node represents
        self.count = count          # number of transactions through this node
        self.parent = parent        # parent link, used to walk prefix paths
        self.children = {}          # item name -> child TreeNode
        self.node_link = None       # next node carrying the same item

    def inc(self, count):
        self.count += count

    def display(self, ind=1):
        print('  ' * ind, self.name, ' ', self.count)
        for child in self.children.values():
            child.display(ind + 1)

def create_tree(data_set, min_support=1):
    """Build an FP-tree. data_set maps frozenset(transaction) -> count."""
    # first pass: total support of each item
    header_table = {}
    for transaction in data_set:
        for item in transaction:
            header_table[item] = header_table.get(item, 0) + data_set[transaction]
    # drop items below the minimum support
    for k in list(header_table.keys()):
        if header_table[k] < min_support:
            del header_table[k]
    freq_item_set = set(header_table.keys())
    if len(freq_item_set) == 0:
        return None, None
    # header entries become [count, head of the node-link chain]
    for k in header_table:
        header_table[k] = [header_table[k], None]
    ret_tree = TreeNode('Null Set', 1, None)
    # second pass: insert each transaction with items in descending support
    for transaction, count in data_set.items():
        local_d = {}
        for item in transaction:
            if item in freq_item_set:
                local_d[item] = header_table[item][0]
        if len(local_d) > 0:
            # break support ties by item name so the ordering is deterministic
            ordered_items = [v[0] for v in sorted(local_d.items(),
                                                  key=lambda p: (-p[1], p[0]))]
            update_tree(ordered_items, ret_tree, header_table, count)
    return ret_tree, header_table

def update_tree(items, in_tree, header_table, count):
    """Insert an ordered transaction into the tree, reusing shared prefixes."""
    if items[0] in in_tree.children:
        in_tree.children[items[0]].inc(count)
    else:
        in_tree.children[items[0]] = TreeNode(items[0], count, in_tree)
        if header_table[items[0]][1] is None:
            header_table[items[0]][1] = in_tree.children[items[0]]
        else:
            update_header(header_table[items[0]][1], in_tree.children[items[0]])
    if len(items) > 1:
        update_tree(items[1:], in_tree.children[items[0]], header_table, count)

def update_header(node_to_test, target_node):
    """Append target_node to the end of the node-link chain."""
    while node_to_test.node_link is not None:
        node_to_test = node_to_test.node_link
    node_to_test.node_link = target_node

def ascend_tree(leaf_node, prefix_path):
    """Collect the names of all nodes from leaf_node up to the root."""
    if leaf_node.parent is not None:
        prefix_path.append(leaf_node.name)
        ascend_tree(leaf_node.parent, prefix_path)

def find_prefix_path(base_pat, header_table):
    """Collect the conditional pattern base for base_pat by following
    the node-link chain (the header entry is a single node, not an
    iterable, so it must be traversed via node_link)."""
    cond_pats = {}
    tree_node = header_table[base_pat][1]
    while tree_node is not None:
        prefix_path = []
        ascend_tree(tree_node, prefix_path)
        if len(prefix_path) > 1:
            cond_pats[frozenset(prefix_path[1:])] = tree_node.count
        tree_node = tree_node.node_link
    return cond_pats

def mine_tree(in_tree, header_table, min_support, prefix=None, freq_item_list=None):
    """Recursively mine frequent itemsets from the FP-tree.
    prefix must be a set (new_freq_set.add below); None defaults avoid
    the mutable-default-argument pitfall."""
    if prefix is None:
        prefix = set()
    if freq_item_list is None:
        freq_item_list = []
    # mine items from least frequent to most frequent
    big_l = [v[0] for v in sorted(header_table.items(), key=lambda p: p[1][0])]
    for base_pat in big_l:
        new_freq_set = prefix.copy()
        new_freq_set.add(base_pat)
        freq_item_list.append(new_freq_set)
        cond_pat_bases = find_prefix_path(base_pat, header_table)
        cond_tree, cond_header = create_tree(cond_pat_bases, min_support)
        if cond_header is not None:
            mine_tree(cond_tree, cond_header, min_support, new_freq_set, freq_item_list)
    return freq_item_list
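Note that create_tree above expects a dict mapping each transaction (as a frozenset) to its occurrence count, and the listing never shows how to build one. A small helper can do the conversion (init_set is a name introduced here, not part of the listing):

```python
def init_set(data_set):
    """Convert a list of transactions into the dict form the FP-tree
    builder expects: frozenset(transaction) -> occurrence count."""
    ret_dict = {}
    for transaction in data_set:
        key = frozenset(transaction)
        ret_dict[key] = ret_dict.get(key, 0) + 1
    return ret_dict

data = [[1, 3, 4], [2, 3, 5], [1, 2, 3, 5], [2, 5]]
d = init_set(data)
print(d[frozenset({2, 5})])  # each distinct transaction occurs once -> 1
```

With the listing above in scope, a typical run would look roughly like create_tree(init_set(data), min_support=2) followed by mine_tree on the resulting tree and header table.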

Applications

Frequent itemset analysis has a wide range of applications; here are a few examples:

  • Association-rule mining: finding items or events that tend to occur together, e.g. estimating in market-basket analysis how likely milk and tissues are to appear in the same basket;
  • Reachability queries: finding other items connected to a given item, e.g. via breadth-first search in graph theory;
  • Product recommendation: predicting which products a user is likely to buy from purchase history and preferences, e.g. recommending similar products, or products that other buyers of the same item also purchased.
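The association-rule use case can be made concrete: the confidence of a rule A → B is support(A ∪ B) / support(A). A minimal sketch on basket data invented for this illustration:

```python
baskets = [{"milk", "tissue"}, {"milk", "bread"},
           {"milk", "tissue", "bread"}, {"tissue"}]

def support(itemset, baskets):
    """Fraction of baskets containing every item in itemset."""
    return sum(1 for b in baskets if itemset <= b) / len(baskets)

def confidence(antecedent, consequent, baskets):
    """Confidence of the rule antecedent -> consequent:
    support(antecedent union consequent) / support(antecedent)."""
    return support(antecedent | consequent, baskets) / support(antecedent, baskets)

print(confidence({"milk"}, {"tissue"}, baskets))  # 2 of 3 milk baskets also contain tissue
```

In practice the supports would come from a frequent itemset miner such as the Apriori or FP-Growth implementations above, so rules are only generated for itemsets already known to be frequent.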