If method = "apriori" is specified in control, a very simple rule
  induction method is used: all rules are mined from the transactions
  data set using Apriori with the minimal support found in the itemsets,
  and in a second step all rules which do not stem from one of the
  supplied itemsets are removed. This procedure can be very slow in many
  cases (e.g., for itemsets with many elements or very low support).
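A minimal sketch of this usage, assuming the Adult data set shipped with
  arules; the support and confidence values are arbitrary:

  library("arules")
  data("Adult")

  ## mine frequent itemsets (no rules) with Eclat
  itsets <- eclat(Adult, parameter = list(supp = 0.5))

  ## induce rules for these itemsets; the transactions are needed again
  ## because Apriori re-mines the rules before filtering them
  rules <- ruleInduction(itsets, Adult, confidence = 0.8,
                         control = list(method = "apriori"))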
If method = "ptree" is specified in control, the transactions are
  counted into a prefix tree and the rules are then selectively generated
  using the counts in the tree. This is usually faster than the approach
  above.
If reduce = TRUE is specified in control, unused items are removed
  from the data before rules are created. This may be slower for large
  transaction data sets; however, for method = "ptree" it is highly
  recommended, since the items are additionally reordered to reduce the
  counting time.
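Continuing the sketch above, both options would be requested together
  via the control argument:

  ## usually faster: count the transactions into a prefix tree, with
  ## unused items removed and the remaining items reordered first
  rules <- ruleInduction(itsets, Adult, confidence = 0.8,
                         control = list(method = "ptree", reduce = TRUE))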
If the argument transactions is missing, it is assumed that x contains
  a complete set (lattice) of frequent itemsets together with their
  support counts. In this case, rules can be induced directly without
  additional support counting, which is very fast.
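For illustration, a sketch continuing with the Adult data from above,
  where the complete set of frequent itemsets is mined with apriori():

  ## mine a complete set of frequent itemsets including their support
  itsets <- apriori(Adult,
    parameter = list(target = "frequent itemsets", support = 0.5))

  ## induce rules directly from the itemsets; no transactions and hence
  ## no additional support counting needed
  rules <- ruleInduction(itsets, confidence = 0.8)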
  
For transactions, a data set different from the one used for mining the
  original itemsets can be used; however, the new set has to conform with
  the original data in terms of items and item order.
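One possible way to make a second transaction data set conform is to
  recode it to the item coding of the original data. A sketch, where
  newTrans is a hypothetical second transactions object over the same
  items:

  ## recode newTrans so its item coding matches the original data
  newTrans <- recode(newTrans, itemLabels = itemLabels(Adult))

  rules <- ruleInduction(itsets, newTrans, confidence = 0.8)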
This method can also be used to produce the closed association rules
  defined by Pei et al. (2000), i.e., rules \(X \Rightarrow Y\) where
  both \(X\) and \(Y\) are closed frequent itemsets. See the Examples
  section for code.
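The general pattern is sketched below, again assuming the Adult data;
  the Examples section contains the exact code:

  ## mine closed frequent itemsets
  closed_is <- apriori(Adult,
    parameter = list(target = "closed frequent itemsets", support = 0.4))

  ## induce only rules that stem from these closed itemsets
  closed_rules <- ruleInduction(closed_is, Adult, confidence = 0.8,
                                control = list(method = "apriori"))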