Pruning Ratio of the Apriori Algorithm
Chapter 5 Problems
Use the following table to answer questions 1 and 2.
1a. Identify a rule that has reasonably high support, but low confidence. Only state the rule; nothing more.
1b. Identify a rule that has low support and high confidence. Only state the rule; nothing more.
1c. Identify a rule that has high support and high confidence. Only state the rule; nothing more.
1d. Identify a rule that has low support and low confidence. Only state the rule; nothing more.
2a. By treating each transaction ID as a market basket, compute the support for each of the following itemsets:
s({e}) =
s({b,d}) =
s({b, d, e}) =
2b. Compute the confidence for the following association rules:
c({b, d} → {e}) =
c({e} → {b, d}) =
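Since the transaction table itself is not reproduced in this copy of the assignment, the sketch below uses a hypothetical five-basket table to show how the quantities in 2a and 2b are computed: s(X) is the fraction of baskets containing X, and c(X → Y) = s(X ∪ Y) / s(X).

```python
# Support and confidence over market-basket data. The assignment's
# actual table is not reproduced here, so the `transactions` list
# below is a hypothetical stand-in used only to illustrate the
# calculations.

transactions = [
    {"a", "b", "d", "e"},
    {"b", "c", "d"},
    {"a", "b", "d", "e"},
    {"a", "c", "d", "e"},
    {"b", "c", "d", "e"},
]

def support(itemset, baskets):
    """s(X): fraction of baskets containing every item of X."""
    itemset = set(itemset)
    return sum(itemset <= b for b in baskets) / len(baskets)

def confidence(lhs, rhs, baskets):
    """c(X -> Y) = s(X ∪ Y) / s(X)."""
    return support(set(lhs) | set(rhs), baskets) / support(lhs, baskets)

print(support({"e"}, transactions))                 # 4/5
print(support({"b", "d"}, transactions))            # 4/5
print(support({"b", "d", "e"}, transactions))       # 3/5
print(confidence({"b", "d"}, {"e"}, transactions))  # (3/5) / (4/5) = 3/4
print(confidence({"e"}, {"b", "d"}, transactions))  # (3/5) / (4/5) = 3/4
```

The same two functions answer 2c/2d once the baskets are regrouped by Customer ID instead of transaction ID.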
2c. By treating each Customer ID as a market basket and treating each item as a binary variable (1 if the customer bought the item, 0 otherwise), compute the support for each of the following itemsets:
s({e}) =
s({b,d}) =
s({b, d, e}) =
2d. Use your results in 2c to compute the confidence for the following association rules:
c({b, d} → {e}) =
c({e} → {b, d}) =
Use the following market basket transactions to answer items in question 3.
3a. What is the maximum number of association rules that can be extracted from this data?
Note: be sure to show your calculations.
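A worked check for 3a: with d distinct items, every rule X → Y assigns each item to the antecedent, the consequent, or neither (3^d ways); subtracting the 2^d assignments with an empty antecedent and the 2^d with an empty consequent, then adding back the all-empty assignment subtracted twice, gives R = 3^d − 2^(d+1) + 1. A minimal sketch (d = 6 is only an example; use the number of distinct items in the given transactions):

```python
def max_rules(d):
    """Maximum number of association rules extractable from d distinct
    items: R = 3^d - 2^(d+1) + 1."""
    return 3**d - 2**(d + 1) + 1

# Hypothetical example with d = 6 distinct items:
print(max_rules(6))  # 729 - 128 + 1 = 602
```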
- The following market basket transactions were used to create the itemset lattice shown next to the market basket. A candidate is discarded if any one of its subsets is found to be infrequent during the candidate pruning step. Suppose the Apriori algorithm is applied to the data set with minsup = 30%; then any itemset occurring in fewer than 3 transactions is considered infrequent.
(a)
Complete the grid below in order to label each node in the lattice with the following letters:
Note: Begin by entering every itemset from the lattice into the table below. You may need to expand the grid in order to accommodate all of your responses.
N: The itemset is not considered a candidate itemset by the Apriori algorithm. There are two reasons an itemset may not be a candidate: (a) it is not generated at all during the candidate generation step, or (b) it is generated during the candidate generation step but is subsequently removed during the candidate pruning step because one of its subsets is found to be infrequent.
F: The candidate itemset is found to be frequent by the Apriori algorithm.
I: The candidate itemset is found to be infrequent after support counting.
Itemset | Assigned letter (N, F, or I)
(b)
What is the percentage of frequent itemsets (with respect to all itemsets in the lattice)?
Answer:
(c)
What is the pruning ratio of the Apriori algorithm on this data set? (Pruning ratio is defined as the percentage of itemsets not considered to be a candidate because (1) they are not generated during candidate generation or (2) they are pruned during the candidate pruning step.)
Answer:
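The labelling in (a) and the ratios in (b) and (c) can be cross-checked mechanically. The sketch below runs a level-wise Apriori over a hypothetical ten-basket data set (the assignment's real baskets will give different labels) and tags every lattice node N, F, or I using exactly the definitions above.

```python
from itertools import combinations

# Hypothetical baskets: 10 transactions, minsup = 30%, so an itemset
# needs a support count >= 3 to be frequent.
transactions = [
    {"a", "b"}, {"b", "c", "d"}, {"a", "c", "d", "e"}, {"a", "d", "e"},
    {"a", "b", "c"}, {"a", "b", "c", "d"}, {"a"}, {"a", "b", "c"},
    {"a", "b", "d"}, {"b", "c", "e"},
]
items = sorted(set().union(*transactions))
minsup_count = 3

def support_count(itemset):
    return sum(set(itemset) <= t for t in transactions)

labels = {}
prev_frequent = set()
for k in range(1, len(items) + 1):
    current_frequent = set()
    for cand in (frozenset(c) for c in combinations(items, k)):
        # N: not a candidate -- either never generated, or pruned
        # because some (k-1)-subset is infrequent.
        if k > 1 and any(frozenset(s) not in prev_frequent
                         for s in combinations(cand, k - 1)):
            labels[cand] = "N"
        elif support_count(cand) >= minsup_count:
            labels[cand] = "F"
            current_frequent.add(cand)
        else:
            labels[cand] = "I"
    prev_frequent = current_frequent

total = len(labels)  # 2^d - 1 nonempty itemsets in the lattice
frequent = sum(1 for v in labels.values() if v == "F")
pruned = sum(1 for v in labels.values() if v == "N")
print(f"frequent fraction: {frequent}/{total} = {frequent / total:.1%}")
print(f"pruning ratio:     {pruned}/{total} = {pruned / total:.1%}")
```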
Chapter 6 Problems
Use the following table to answer question #1.
1a. Using an Excel spreadsheet, create a binarized version of the data set with the following categories (note: these are also the itemset names in the spreadsheet):
Sky = Fair, Sky = Stormy, Status = Impaired, Status = Sober, Violation = None, Violation = Speeding, Violation = Stop, Violation = Signal, Restraint = No, Restraint = Yes, Crash = Major, Crash = Minor
Paste the Excel spreadsheet into this document here.
1b. What is the maximum width of each transaction in the binarized data?
1c. How did you determine the answer for item 1b?
1d. Assuming that the support threshold is 30%, how many candidate and frequent itemsets will be generated?
1e. Again using Excel, create a data set that contains only the following asymmetric binary attributes:
(Weather = Bad, Impaired, Traffic violation = Yes, Restraint = No, Crash Severity = Major).
The itemset headings are: Bad, Impaired, Violation, NoRestraint, and Major.
For Traffic violation, only None has a value of 0; the rest of the attribute values are assigned 1.
Copy and paste the Excel spreadsheet here:
Assuming that the support threshold is 30%, how many candidate and frequent itemsets will be generated?
1f. Compare the number of candidate and frequent itemsets generated in 1(d) and 1(e). What is your analysis?
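The binarization in 1a amounts to one-hot encoding each (attribute, value) pair as its own 0/1 column. A minimal sketch, using two hypothetical rows in place of the assignment's table; note that each record turns on exactly one column per attribute, which is the reasoning behind 1b and 1c:

```python
# One-hot (binarized) encoding of categorical records. The two
# `records` below are hypothetical rows; substitute the rows of the
# assignment's table.

records = [
    {"Sky": "Fair", "Status": "Sober", "Violation": "Speeding",
     "Restraint": "Yes", "Crash": "Minor"},
    {"Sky": "Stormy", "Status": "Impaired", "Violation": "None",
     "Restraint": "No", "Crash": "Major"},
]

def binarize(rows):
    """One 0/1 column per (attribute, value) pair seen in the data."""
    pairs = sorted({(k, v) for row in rows for k, v in row.items()})
    header = [f"{k} = {v}" for k, v in pairs]
    table = [[1 if row.get(k) == v else 0 for k, v in pairs]
             for row in rows]
    return header, table

header, table = binarize(records)
# A transaction's width is its number of 1s; every record sets exactly
# one column per attribute, so the maximum width equals the number of
# attributes.
print(header)
print(max(sum(row) for row in table))
```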
- Find all the frequent subsequences with support ≥ 50% given the sequence shown below. Assume there are no timing constraints imposed on the sequence.
Answer:
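Without timing constraints, w is a subsequence of a data sequence if the elements of w can be matched, in order, to supersets among the data sequence's elements, and its support is the fraction of data sequences that contain it. The sequence database itself is not reproduced in this copy, so the sketch below uses hypothetical data sequences:

```python
# Untimed subsequence containment and support. The `database` below is
# a hypothetical stand-in for the assignment's sequence data.

database = [
    [{1, 2}, {3}, {1}],
    [{1}, {3}, {2}],
    [{1, 2}, {3}, {2}],
]

def contains(data, w):
    """Greedily match each element of w (a list of sets) to a superset
    element of `data`, left to right."""
    i = 0
    for element in data:
        if i < len(w) and w[i] <= element:
            i += 1
    return i == len(w)

def support(w):
    """Fraction of data sequences containing the subsequence w."""
    return sum(contains(d, w) for d in database) / len(database)

print(support([{1}, {3}]))      # contained in all three sequences
print(support([{1, 2}, {2}]))   # contained only in the third
```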
- For each of the sequences w = < e1 e2 ... ei ei+1 ... elast > given below, determine whether it is a subsequence of the sequence
< {1, 2, 3} {2, 4} {2, 4, 5} {3, 5} {6} >
subjected to the following timing constraints:
mingap = 0 (interval between last event in ei and first event in ei+1 is > 0)
maxgap = 3 (interval between first event in ei and last event in ei+1 is ≤ 3)
maxspan = 5 (interval between first event in e1 and last event in elast is ≤ 5)
ws = 1 (time between first and last events in ei is ≤ 1)
- w = < {1} {2} {3} >
Answer:
- w = < {1, 2, 3, 4} {5, 6} >
Answer:
- w = < {1} {2, 4} {6} >
Answer:
- w = < {1, 2} {3, 4} {5, 6} >
Answer:
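The four timing constraints can be checked mechanically by backtracking over candidate timestamp windows. In the sketch below, timestamps 1 through 5 are an assumption (the data sequence is given without explicit times), and maxgap is applied exactly as worded above (last event of e_{i+1} minus first event of e_i must be ≤ 3); a different reading of maxgap can change the verdicts.

```python
# Timed-subsequence checker. Timestamps 1..5 are an assumed labelling
# of the data sequence's elements.

seq = {1: {1, 2, 3}, 2: {2, 4}, 3: {2, 4, 5}, 4: {3, 5}, 5: {6}}
MINGAP, MAXGAP, MAXSPAN, WS = 0, 3, 5, 1

def windows(element):
    """Timestamp windows [lo, hi] of span <= WS whose events cover
    `element` (ws constraint)."""
    times = sorted(seq)
    result = []
    for lo in times:
        for hi in times:
            if lo <= hi <= lo + WS:
                covered = set().union(*(seq[t] for t in times if lo <= t <= hi))
                if set(element) <= covered:
                    result.append((lo, hi))
    return result

def is_subsequence(w):
    """Backtrack over candidate windows for each element of w."""
    def extend(i, prev_lo, prev_hi, first_lo):
        if i == len(w):
            return True
        for lo, hi in windows(w[i]):
            if i > 0 and lo - prev_hi <= MINGAP:   # mingap: gap must be > 0
                continue
            if i > 0 and hi - prev_lo > MAXGAP:    # maxgap, per the wording
                continue
            start = lo if i == 0 else first_lo
            if i == len(w) - 1 and hi - start > MAXSPAN:
                continue                           # maxspan over whole match
            if extend(i + 1, lo, hi, start):
                return True
        return False
    return extend(0, None, None, None)

for w in ([{1}, {2}, {3}],
          [{1, 2, 3, 4}, {5, 6}],
          [{1}, {2, 4}, {6}],
          [{1, 2}, {3, 4}, {5, 6}]):
    print(w, "->", is_subsequence(w))
```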
RUBRIC

Excellent Quality (95-100%)
Introduction (45-41 points): The background and significance of the problem and a clear statement of the research purpose are provided. The search history is mentioned.
Literature Support (91-84 points): The background and significance of the problem and a clear statement of the research purpose are provided. The search history is mentioned.
Methodology (58-53 points): Content is well organized, with headings for each slide and bulleted lists to group related material as needed. Use of font, color, graphics, effects, etc. to enhance readability and presentation content is excellent. The length requirement of 10 slides/pages or fewer is met.

Average Quality (50-85%)
Introduction (40-38 points): More depth/detail for the background and significance is needed, or the research detail is not clear. No search history information is provided.
Literature Support (83-76 points): Review of relevant theoretical literature is evident, but there is little integration of studies into concepts related to the problem. The review is partially focused and organized. Supporting and opposing research are included. A summary of the information presented is included. The conclusion may not contain a biblical integration.
Methodology (52-49 points): Content is somewhat organized, but no clear structure is apparent. The use of font, color, graphics, effects, etc. occasionally detracts from the presentation content. Length requirements may not be met.

Poor Quality (0-45%)
Introduction (37-1 points): The background and/or significance are missing. No search history information is provided.
Literature Support (75-1 points): Review of relevant theoretical literature is evident, but there is no integration of studies into concepts related to the problem. The review is partially focused and organized. Supporting and opposing research are not included in the summary of the information presented. The conclusion does not contain a biblical integration.
Methodology (48-1 points): There is no clear or logical organizational structure, and no logical sequence is apparent. The use of font, color, graphics, effects, etc. often detracts from the presentation content. Length requirements may not be met.