Topic Discovery and Future Trend Prediction in Scholarly Networks


  1. Topic Discovery and Future Trend Prediction in Scholarly Networks: Interim Report (515030910600)

  2. Introduction & Existing Works: Proper resource allocation in research requires accurate forecasting of future research activities.

  3. Forecasting: Judgmental Analysis depends on the subjective judgments of experts; Numerical Analysis extrapolates historical data through a specific function.

  4. Area:
  – Co-citation Analysis: calculate the same elements in a document to determine its trends
  – Clustering: calculate the frequency of words to group the documents into a certain category
  – Co-word Analysis: calculate the keywords and abstracts and use NGD to arrange them into taxonomies
  – Numerical Analysis: obtain the bibliometric technology growth score

  $\text{growth score} = \dfrac{\sum_{y \in Y} y \times f_y}{\sum_{y \in Y} f_y}$

  where Y is the whole observation period, y is a year in Y, and f_y is the term frequency in year y.
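As an illustration of the growth score above, here is a minimal sketch in Python, assuming the reconstructed formula (a term-frequency-weighted average of the observation years) and made-up yearly frequencies:

```python
# Minimal sketch of the bibliometric growth score as reconstructed above:
# the term-frequency-weighted average of the observation years.
# The yearly frequencies below are made-up illustration values.

def growth_score(freq_by_year):
    """freq_by_year: dict mapping year y -> term frequency f_y."""
    total = sum(freq_by_year.values())
    if total == 0:
        return None
    return sum(y * f for y, f in freq_by_year.items()) / total

freqs = {2012: 3, 2013: 5, 2014: 9, 2015: 14, 2016: 21}  # hypothetical counts
print(growth_score(freqs))  # about 2014.9; a later value indicates a growing topic
```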

  5. Theory & Research Plan: Analyze the time series and find similar patterns among them.

  6. Analysis of Time Series with Machine Learning: Principal Component Analysis, Sliced Inverse Regression, Factor Analysis, Partial Least Squares, Blind Source Separation, ……
  – A single best method is not always applicable for different kinds of data.
  – Several of the best prediction results should be taken into account and then combined into an ensemble system to get the final forecast result.

  7. Time Series Similarity: Euclidean Distance measures the difference between each point of the series; Dynamic Time Warping allows acceleration and deceleration of signals along the time dimension.
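A minimal sketch of the two similarity measures, assuming plain NumPy and the classic dynamic-programming formulation of DTW (the toy series are made up):

```python
import numpy as np

def euclidean_distance(a, b):
    """Point-by-point difference between two equal-length series."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def dtw_distance(a, b):
    """Classic dynamic-programming DTW; allows stretching along the time axis."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

s1 = [1, 2, 3, 5, 8]      # toy series
s2 = [1, 1, 2, 3, 5, 8]   # same shape, shifted in time
# Euclidean penalizes the shift; DTW aligns the shifted points and reports 0.
print(euclidean_distance(s1, s2[:5]), dtw_distance(s1, s2))
```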

  8. Research Plan:
  – Select research topics from the dataset and categorize them
  – Construct a time series from each topic's frequency per year
  – Construct training and testing matrices (a sliding-window sketch follows below)
  – Predict the time series using ANN & SVM with various parameters, as well as ARIMA and a logistic model
  – Select the best models on the training data that are most similar to the testing data
  – Combine the prediction results using the average, the median, and performance-based ranking
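The matrix-construction step could look like the following sketch; the window length of 3 and the yearly counts are assumptions for illustration:

```python
import numpy as np

def make_supervised(series, window=3):
    """Turn a yearly-frequency series into (X, y) pairs:
    each row of X holds `window` consecutive values, y is the next value."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X, float), np.array(y, float)

counts = [4, 6, 9, 13, 20, 28, 41, 55]   # hypothetical topic frequency per year
X, y = make_supervised(counts, window=3)
X_train, y_train = X[:-1], y[:-1]        # earlier windows for training
X_test, y_test = X[-1:], y[-1:]          # last window held out for testing
print(X_train.shape, y_train.shape, X_test.shape)
```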

  9. Experiments

  10. Datasets: collected via link.springer.com

  11. Time Series

  12. Comparison among Individual Predictors:
  – Neural Network: hidden node set = 1 / 2 / 3 / 5 / 10
  – SVR: RBF kernel with sigma width = 0.01 / 0.1 / 0.5 / 1 / 2 / 5
  – SVR: polynomial kernel with degree = 1 / 2 / 3
  – The first experiment in this study compares the performance of each predictor; varying the parameters of these predictors yields 14 models in total.
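A sketch of the 14 configurations using scikit-learn; mapping the slide's RBF "sigma width" to scikit-learn's gamma via gamma = 1/(2·sigma²) is an assumption, as is the choice of MLPRegressor for the neural networks:

```python
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def build_models():
    """5 neural networks + 6 RBF SVRs + 3 polynomial SVRs = 14 predictors."""
    models = {}
    for h in (1, 2, 3, 5, 10):
        models[f"NN-h{h}"] = MLPRegressor(hidden_layer_sizes=(h,), max_iter=2000, random_state=0)
    for sigma in (0.01, 0.1, 0.5, 1, 2, 5):
        # assumption: the slide's RBF "sigma width" maps to gamma = 1 / (2 * sigma**2)
        models[f"SVR-rbf-{sigma}"] = SVR(kernel="rbf", gamma=1.0 / (2 * sigma ** 2))
    for d in (1, 2, 3):
        models[f"SVR-poly-{d}"] = SVR(kernel="poly", degree=d)
    return models

models = build_models()
print(len(models))  # 14
```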

  13. Comparison among Individual Predictors: forecasting performance (MSE) of the individual models on 14 time series.
  – A single best method is not always applicable for different kinds of data: most predictors predict some time series well but not the others.
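A toy sketch of this MSE comparison: fit two of the predictors on two hypothetical topic series and compare their held-out errors; the series values and model choices are illustrative assumptions, not the study's data:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

def window_xy(series, w=3):
    """Sliding-window (X, y) pairs, as in the research-plan sketch."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)], float)
    y = np.array(series[w:], float)
    return X, y

series_a = [3, 5, 8, 12, 18, 27, 40, 60]     # fast-growing topic (made up)
series_b = [20, 22, 19, 23, 21, 24, 20, 23]  # fluctuating topic (made up)

for name, series in (("A", series_a), ("B", series_b)):
    X, y = window_xy(series)
    models = {"NN-h3": MLPRegressor(hidden_layer_sizes=(3,), max_iter=5000, random_state=0),
              "SVR-rbf": SVR(kernel="rbf")}
    for mname, model in models.items():
        model.fit(X[:-1], y[:-1])                       # train on all but the last window
        mse = mean_squared_error(y[-1:], model.predict(X[-1:]))
        print(f"series {name}  {mname}  MSE={mse:.2f}")  # no predictor wins on every series
```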

  14. Combination of Models using a Similarity Measure: MSE of method combinations using Euclidean similarity, DTW similarity, and no similarity measure.
  – The second experiment in this study selects the predictors that perform best on the training time series most similar to the testing time series to be predicted.
  – The best model in validation does not necessarily imply the best model in testing.
  – As the number of models increases, the MSE decreases up to about half of the total number of models.
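The selection step could be sketched as follows, under these assumptions: training series are ranked by Euclidean distance to the testing series (DTW could be substituted), each model's validation MSE is averaged over the nearest series, and roughly half of the 14 models are kept. The function and argument names are hypothetical:

```python
import numpy as np

def select_models(test_series, train_series, val_mse, k_series=3, k_models=7):
    """Pick the models that did best on the training series most similar to the test series.

    train_series: dict name -> series (same length as test_series)
    val_mse:      dict series name -> dict model name -> validation MSE on that series
    """
    def euclid(a, b):
        return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

    # 1. rank training series by similarity to the series we want to predict
    nearest = sorted(train_series, key=lambda s: euclid(test_series, train_series[s]))[:k_series]
    # 2. average each model's validation MSE over those similar series
    model_names = next(iter(val_mse.values())).keys()
    avg = {m: np.mean([val_mse[s][m] for s in nearest]) for m in model_names}
    # 3. keep the k_models best-performing models (about half of the 14 in the slides)
    return sorted(avg, key=avg.get)[:k_models]
```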

  15. Combination of Models using a Similarity Measure: average performance of forecast combinations using models selected by Euclidean and DTW similarity, compared to combinations using the best model and all models without a similarity measure.
  – Combining methods selected by the similarity between training and testing data may lead to better predictions than combining all methods.
  – Among the three model-selection schemes, Euclidean similarity is the one that may yield the lowest MSE.
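The combination schemes named in the research plan (average, median, performance-based ranking) could be sketched as follows; the linear rank weighting is an assumption, since the slides do not spell out the ranking scheme:

```python
import numpy as np

def combine_forecasts(preds, val_mse=None, method="average"):
    """Combine the selected models' forecasts for one horizon.

    preds:   dict model name -> predicted value
    val_mse: dict model name -> validation MSE (needed for the ranking scheme)
    """
    values = np.array(list(preds.values()), float)
    if method == "average":
        return float(values.mean())
    if method == "median":
        return float(np.median(values))
    if method == "rank":
        # performance-based ranking: better (lower-MSE) models get larger weights
        order = sorted(preds, key=lambda m: val_mse[m])            # best model first
        weights = {m: len(order) - i for i, m in enumerate(order)}  # assumed linear weights
        total = sum(weights.values())
        return float(sum(weights[m] * preds[m] for m in preds) / total)
    raise ValueError(method)

preds = {"NN-h3": 42.0, "NN-h5": 45.0, "SVR-poly-3": 39.0}  # hypothetical forecasts
mse = {"NN-h3": 1.2, "NN-h5": 2.5, "SVR-poly-3": 0.8}
print(combine_forecasts(preds, method="average"),
      combine_forecasts(preds, method="median"),
      combine_forecasts(preds, mse, method="rank"))
```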

  16. The Most Often Used Models: the models most often selected among the first eight models of all series.
  – Neural Networks are chosen as best models more often than the SVRs.
  – Among the NNs, moderate numbers of hidden nodes, such as 3 and 5, are preferred.
  – Among the SVRs, the polynomial kernel of degree 3 and the RBF kernel of width 1, which are more suitable for fluctuating patterns, closely follow the NNs.

  17. Conclusion

  18. Conclusion:
  – Combining methods selected by the similarity between training and testing data may perform better than combining all methods.
  – The optimum number of models to combine is about fifty percent of the total number of models.
  – Combining too few models may not provide enough diversification of the methods' capabilities, whereas combining too many may include poorly performing models.

  19. Thank You
