Mahout Canopy Algorithm Source Code Analysis, Part 2: CanopyMapper
First, a correction to the previous post. The Canopy test example there contained this line of code:
buildClusters(Configuration conf, Path input, Path output, DistanceMeasure measure, double t1, double t2, double t3, double t4, int clusterFilter, boolean runSequential)
I previously took clusterFilter to be the number of clusters. It is actually a threshold on how many samples each canopy must contain (a canopy is kept only if it holds at least clusterFilter + 1 samples), not the cluster count, because discovering the number of clusters is exactly what Canopy is for.
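As a minimal illustration of that filter (the same check appears in the cleanup() code later in this post), a canopy survives only when it observed more than clusterFilter samples. The snippet below is a stand-alone sketch with made-up observation counts, not part of the Mahout code:

// Hedged illustration of the clusterFilter semantics: a canopy survives cleanup()
// only if it observed MORE than clusterFilter samples. The counts are hypothetical.
public class ClusterFilterSketch {
    public static void main(String[] args) {
        int clusterFilter = 1;                 // value passed to buildClusters(...)
        long[] observationCounts = {7, 3, 1};  // hypothetical per-canopy sample counts

        for (long numObservations : observationCounts) {
            boolean kept = numObservations > clusterFilter; // same condition as in cleanup()
            System.out.println("canopy with " + numObservations + " samples -> "
                    + (kept ? "kept" : "filtered out"));
        }
    }
}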
Now let's analyze CanopyMapper.
First, CanopyMapper is adapted into code that runs without a Hadoop cluster, i.e. plain Java, as follows:
package mahout.test.canopy.debug;

import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.Text;
import org.apache.mahout.clustering.canopy.Canopy;
import org.apache.mahout.clustering.canopy.CanopyClusterer;
import org.apache.mahout.clustering.canopy.CanopyConfigKeys;
import org.apache.mahout.common.distance.DistanceMeasure;
import org.apache.mahout.common.distance.ManhattanDistanceMeasure;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.math.VectorWritable;

import com.google.common.collect.Lists;

/**
 * Canopy test program
 * @author fansy
 * time 2013/7/21 22:59
 * edited at 2013/7/22 21:38
 */
public class CanopyDebug {

    private static final Collection<Canopy> canopies = Lists.newArrayList();
    private static CanopyClusterer canopyClusterer;
    private static int clusterFilter;
    private static int num = 0;
    private static Map<Text, VectorWritable> center = new HashMap<Text, VectorWritable>();

    public static void main(String[] args) {
        // initialization
        DistanceMeasure measure = new ManhattanDistanceMeasure();
        double t1 = 30.2;
        double t2 = 5.3;
        int clusterFilter = 1;
        Configuration conf = init(measure, t1, t2, clusterFilter);
        // setup()
        setup(conf);
        // map()
        map();
        // cleanup()
        Map<Text, VectorWritable> result = cleanup();
        System.out.println("done..." + result);
    }

    /**
     * Initialization
     * @param clusterFilter minimum number of samples each canopy must contain
     */
    public static Configuration init(DistanceMeasure measure, double t1, double t2, int clusterFilter) {
        Configuration conf = new Configuration();
        double t3 = t1;
        double t4 = t2;
        conf.set(CanopyConfigKeys.DISTANCE_MEASURE_KEY, measure.getClass().getName());
        conf.set(CanopyConfigKeys.T1_KEY, String.valueOf(t1));
        conf.set(CanopyConfigKeys.T2_KEY, String.valueOf(t2));
        conf.set(CanopyConfigKeys.T3_KEY, String.valueOf(t3));
        conf.set(CanopyConfigKeys.T4_KEY, String.valueOf(t4));
        conf.set(CanopyConfigKeys.CF_KEY, String.valueOf(clusterFilter));
        return conf;
    }

    /**
     * Imitation of the Mapper's setup()
     * @param conf the initial parameters
     */
    public static void setup(Configuration conf) {
        canopyClusterer = new CanopyClusterer(conf);
        clusterFilter = Integer.parseInt(conf.get(CanopyConfigKeys.CF_KEY));
    }

    /**
     * Build the input data (random component added to each dimension)
     */
    public static List<VectorWritable> makeInData() {
        List<VectorWritable> list = new ArrayList<VectorWritable>();
        VectorWritable vw = null;
        Vector vector = null;
        for (int i = 0; i < 10; i++) {
            vw = new VectorWritable();
            vector = new RandomAccessSparseVector(3);
            vector.set(0, (i % 3) * (i % 3) * (i % 3) * (i % 3) + Math.random());
            vector.set(1, (i % 3) * (i % 3) * (i % 3) * (i % 3) + Math.random());
            vector.set(2, (i % 3) * (i % 3) * (i % 3) * (i % 3) + Math.random());
            vw.set(vector);
            list.add(vw);
        }
        return list;
    }

    /**
     * Imitation of the Mapper's map()
     */
    public static void map() {
        List<VectorWritable> vwList = makeInData2();
        for (VectorWritable point : vwList) {
            canopyClusterer.addPointToCanopies(point.get(), canopies);
        }
    }

    /**
     * Imitation of the Mapper's cleanup()
     */
    public static Map<Text, VectorWritable> cleanup() {
        for (Canopy canopy : canopies) {
            canopy.computeParameters();
            if (canopy.getNumObservations() > clusterFilter) {
                center.put(new Text("centroid" + num++), new VectorWritable(canopy.getCenter()));
            }
        }
        return center;
    }

    /**
     * Fixed input data: a second way of producing the input that makes debugging easier
     */
    public static List<VectorWritable> makeInData2() {
        List<VectorWritable> list = new ArrayList<VectorWritable>();
        VectorWritable vw = null;
        Vector vector = null;
        for (int i = 0; i < 10; i++) {
            vw = new VectorWritable();
            vector = new RandomAccessSparseVector(3);
            vector.set(0, (i % 3) * (i % 3) * (i % 3) * (i % 3) + 1);
            vector.set(1, (i % 3) * (i % 3) * (i % 3) * (i % 3) + 1);
            vector.set(2, (i % 3) * (i % 3) * (i % 3) * (i % 3) + 1);
            vw.set(vector);
            list.add(vw);
        }
        return list;
    }
}

(Note: the original init() declared its last parameter as clusterFileter, so String.valueOf(clusterFilter) actually read the static field instead of the argument; the parameter name is corrected above so the configured value really is the one passed in.)
With the code above you can debug directly and inspect the result of every step, which makes the algorithm much easier to follow. Let's walk through it with a concrete example.
Input data (as VectorWritable):
[1.0,1.0,1.0] [2.0,2.0,2.0] [17.0,17.0,17.0] [1.0,1.0,1.0] [2.0,2.0,2.0] [17.0,17.0,17.0] [1.0,1.0,1.0] [2.0,2.0,2.0] [17.0,17.0,17.0] [1.0,1.0,1.0]
The data above, already converted to VectorWritable, is produced by makeInData2(). Execution then enters map(); its for loop mimics the map() function of CanopyMapper. The function it calls is addPointToCanopies. Opening the CanopyClusterer class, the body of this function is essentially:

public void addPointToCanopies(Vector point, Collection<Canopy> canopies) {
  boolean pointStronglyBound = false;
  for (Canopy canopy : canopies) {
    double dist = measure.distance(canopy.getCenter().getLengthSquared(), canopy.getCenter(), point);
    if (dist < t1) {
      if (log.isDebugEnabled()) {
        log.debug("Added point: {} to canopy: {}",
            AbstractCluster.formatVector(point, null), canopy.getIdentifier());
      }
      canopy.observe(point);
    }
    pointStronglyBound = pointStronglyBound || dist < t2;
  }
  if (!pointStronglyBound) {
    if (log.isDebugEnabled()) {
      log.debug("Created new Canopy:{} at center:{}",
          nextCanopyId, AbstractCluster.formatVector(point, null));
    }
    canopies.add(new Canopy(point, nextCanopyId++, measure));
  }
}
For the first sample, [1.0,1.0,1.0], canopies is empty, so canopies.add(new Canopy(point, nextCanopyId++, measure)) is executed directly, creating the first canopy. Take a look at the Canopy class: its parent is DistanceMeasureCluster, whose parent in turn is AbstractCluster, with these fields:

private int id;
private long numObservations;
private long totalObservations;
private Vector center;
private Vector radius;
// the observation statistics
private double s0;
private Vector s1;
private Vector s2;
Inside addPointToCanopies the only canopy fields that change are s0, s1, and s2: s0 is the number of samples the canopy has observed, s1 is the component-wise sum of those samples, and s2 is the component-wise sum of their squares.
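To make those statistics concrete, here is a small sketch of what an observe-style update does to s0, s1, and s2 for the first two points of this example. It uses plain arrays rather than Mahout Vectors and mirrors the description above, not the actual Mahout implementation:

// Hedged sketch: accumulate s0, s1, s2 the way the text describes.
public class ObserveSketch {
    static double s0 = 0;               // number of observed samples
    static double[] s1 = new double[3]; // component-wise sum of samples
    static double[] s2 = new double[3]; // component-wise sum of squared samples

    static void observe(double[] x) {
        s0 += 1;
        for (int i = 0; i < x.length; i++) {
            s1[i] += x[i];
            s2[i] += x[i] * x[i];
        }
    }

    public static void main(String[] args) {
        observe(new double[] {1.0, 1.0, 1.0});
        observe(new double[] {2.0, 2.0, 2.0});
        // After these two points: s0 = 2, s1 = [3, 3, 3], s2 = [5, 5, 5]
        System.out.printf("s0=%.0f s1=[%.0f,%.0f,%.0f] s2=[%.0f,%.0f,%.0f]%n",
                s0, s1[0], s1[1], s1[2], s2[0], s2[1], s2[2]);
    }
}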
For the second sample, [2.0,2.0,2.0], dist = 3 (Manhattan distance to the first canopy's center [1.0,1.0,1.0]); since 3 < t1 = 30.2 the point is observed by that canopy, and since 3 < t2 = 5.3 it is strongly bound, so no new canopy is created.
For the third sample, [17,17,17], dist = 16 * 3 = 48 > t1 = 30.2, so the point is neither observed by the first canopy nor strongly bound; canopies.add() is executed directly, adding a second canopy. The remaining samples are handled the same way.
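To trace the whole assignment at once without Hadoop or Mahout on the classpath, the following stand-alone sketch replays the same t1/t2 logic over the fixed input from makeInData2(). It is a simplified re-implementation for tracing purposes only (single pass, each canopy represented just by its founding point), not the Mahout code itself:

import java.util.ArrayList;
import java.util.List;

// Hedged sketch: replay the t1/t2 assignment of addPointToCanopies over the
// fixed input [1,1,1] [2,2,2] [17,17,17] repeated.
public class CanopyTraceSketch {

    static double manhattan(double[] a, double[] b) {
        double d = 0;
        for (int i = 0; i < a.length; i++) d += Math.abs(a[i] - b[i]);
        return d;
    }

    public static void main(String[] args) {
        double t1 = 30.2, t2 = 5.3;
        List<double[]> canopyCenters = new ArrayList<>();

        // same pattern as makeInData2(): (i % 3)^4 + 1 in every dimension
        for (int i = 0; i < 10; i++) {
            double v = Math.pow(i % 3, 4) + 1;
            double[] point = {v, v, v};

            boolean stronglyBound = false;
            for (int c = 0; c < canopyCenters.size(); c++) {
                double dist = manhattan(canopyCenters.get(c), point);
                if (dist < t1) System.out.println("point " + i + " observed by canopy " + c);
                stronglyBound = stronglyBound || dist < t2;
            }
            if (!stronglyBound) {
                canopyCenters.add(point);
                System.out.println("point " + i + " founds canopy " + (canopyCenters.size() - 1));
            }
        }
        // Expected with these thresholds: the [1,1,1] and [2,2,2] points share
        // canopy 0, the [17,17,17] points form canopy 1.
    }
}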
The last method executed in CanopyMapper is cleanup(), which is also imitated in the debug class above. cleanup() is essentially a filter-and-set step: the filtering means a canopy's sample count must be greater than clusterFilter for it to be kept, and the value setting corresponds to canopy.computeParameters(), which sets the radius and center. The relevant code is:
setNumObservations((long) getS0());
setTotalObservations(getTotalObservations() + getNumObservations());
setCenter(getS1().divide(getS0()));
// compute the component stds
if (getS0() > 1) {
  setRadius(getS2().times(getS0()).minus(getS1().times(getS1()))
      .assign(new SquareRootFunction()).divide(getS0()));
}
setS0(0);
setS1(center.like());
setS2(center.like());
center is set to s1 / s0. For radius, first compute A = s2 * s0 - s1 .* s1 (the first operation is multiplication by the scalar s0, the second is a component-wise product), then take the square root of each component of A and divide by s0. Component by component this is radius_i = sqrt(s0 * s2_i - s1_i^2) / s0, i.e. the per-component standard deviation of the samples, matching the "compute the component stds" comment in the code.
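As a worked check of those formulas, the sketch below computes center and radius for the first canopy of the running example, assuming (as in the trace above) that it ends up with the seven points 4 x [1,1,1] and 3 x [2,2,2]. Since every component is identical here, one scalar per quantity suffices; this is plain Java, not the Mahout implementation:

// Hedged worked example: center and radius of the first canopy, assuming it
// collected four [1,1,1] points and three [2,2,2] points (s0 = 7).
public class RadiusSketch {
    public static void main(String[] args) {
        double s0 = 7;
        double s1 = 4 * 1.0 + 3 * 2.0;              // = 10, per component
        double s2 = 4 * 1.0 * 1.0 + 3 * 2.0 * 2.0;  // = 16, per component

        double center = s1 / s0;                            // about 1.4286
        double radius = Math.sqrt(s0 * s2 - s1 * s1) / s0;  // sqrt(112 - 100) / 7, about 0.495

        System.out.println("center component = " + center);
        System.out.println("radius component = " + radius);
    }
}

With clusterFilter = 1, both canopies (7 and 3 observations respectively) would pass the filter in cleanup(), so both centroids would be emitted.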