Articles — Essay

Articles tagged Essay


Kernel Optimization: Modifying Multiple Tasks Related Variables
This document describes how we tried to run a benchmark with the aio-stress test on Ubuntu Desktop 16.04, the things we found useful, and the problems we encountered.
Alfonso
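A minimal sketch of the kind of run described above, assuming the aio-stress binary (used by the Phoronix AIO-Stress profile) is installed; the target path and the bare, flag-less invocation are illustrative assumptions, not the authors' setup:

```python
import subprocess
import time

# Target file for the I/O stages; aio-stress creates and sizes it itself.
# Both the path and the default options are assumptions: the article does
# not say how the test was configured.
cmd = ["aio-stress", "/tmp/aio-stress-testfile"]

start = time.monotonic()
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout)
print(f"wall-clock time: {time.monotonic() - start:.2f} s")
```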
Kernel Improvement
In this paper we attempt to improve Linux kernel performance by modifying kernel variables.
Daniel Contreras and Itzel Cordero
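The abstract does not say which variables were changed; as an illustration of the general mechanism, here is a hedged sketch of reading and writing a kernel tunable through the /proc/sys interface (vm.swappiness is only an example variable, not necessarily one the authors tuned):

```python
from pathlib import Path

# Kernel tunables are exposed as files under /proc/sys; the path mirrors
# the sysctl name, e.g. vm.swappiness -> /proc/sys/vm/swappiness.
knob = Path("/proc/sys/vm/swappiness")

old_value = knob.read_text().strip()
print(f"vm.swappiness = {old_value}")

# Writing requires root; equivalent to `sysctl -w vm.swappiness=10`.
knob.write_text("10\n")
print(f"vm.swappiness = {knob.read_text().strip()}")
```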
Operative Systems Midterm Project
In this paper, we measure memory performance with the Phoronix test "RAMspeed SMP". We chose this specific benchmark because we know how important memory is for overall system performance. This document shows how much memory performance can change when we modify some variables in the Linux kernel.
Daniel Jiménez
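A sketch of how such a before/after comparison might be driven, assuming the Phoronix Test Suite is installed and "ramspeed" resolves to its RAMspeed test profile; applying the kernel variables is left as a manual step:

```python
import subprocess

# Run the RAMspeed benchmark once per kernel configuration.
# Assumes `phoronix-test-suite` is on PATH; its interactive prompts
# (test install, options, result naming) are answered by hand.
for label in ("baseline", "tuned"):
    input(f"Apply the {label!r} kernel variables, then press Enter... ")
    subprocess.run(["phoronix-test-suite", "benchmark", "ramspeed"], check=True)
```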
LaTeX Assignment 4
This was an assignment for a college physics course. Please let me know what you think! :)
Jonathan Guiang
Why it's good that Soulstealer Vayne is so hard to get - a statistical Analysis
I analyzed Hextech Crafting in a stochastic simulation with 10 million players and found that you're going to gain more Riot Points (in value) than you spend, and that's without counting champions. I've made some assumptions, which can be found in the paper itself for those interested. For everyone else: if you're trying to maximize your RP net worth, stock up on Hextech Chests.
WORST BRUISER EU
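The paper's actual assumptions live in the paper itself; the sketch below only illustrates the shape of such a Monte Carlo estimate, with invented drop probabilities, drop values, and chest cost:

```python
import random

# Hypothetical loot table: (probability, RP-equivalent value of the drop).
# These numbers are made up for illustration; the paper's assumptions
# (and Riot's real drop rates) will differ.
LOOT_TABLE = [
    (0.50, 90),   # e.g. a champion shard
    (0.40, 120),  # e.g. a skin shard
    (0.10, 450),  # e.g. a rare skin shard
]
CHEST_COST_RP = 125  # chest + key bundle cost, also an assumption

def open_chest(rng: random.Random) -> float:
    roll, cum = rng.random(), 0.0
    for prob, value in LOOT_TABLE:
        cum += prob
        if roll < cum:
            return value
    return LOOT_TABLE[-1][1]

def simulate(players: int, chests_per_player: int, seed: int = 0) -> float:
    """Mean net RP value per player after opening the given chests."""
    rng = random.Random(seed)
    net = 0.0
    for _ in range(players):
        for _ in range(chests_per_player):
            net += open_chest(rng) - CHEST_COST_RP
    return net / players

# Scaled down from 10 million players to keep pure Python fast.
print(simulate(players=100_000, chests_per_player=10))
```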
N-gram Frequency Discounts
A short note on the motivation for n-gram frequency discounts in the context of the Katz backoff algorithm.
Paul Glezen
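For context, the bigram form of the Katz backoff estimate that such discounts feed into; the discount $d_r$ applied to an n-gram seen $r$ times is usually taken from the Good-Turing estimate:

```latex
P_{\mathrm{katz}}(w_i \mid w_{i-1}) =
\begin{cases}
  d_{r}\,\dfrac{C(w_{i-1} w_i)}{C(w_{i-1})}, & r = C(w_{i-1} w_i) > 0,\\[1.5ex]
  \alpha(w_{i-1})\,P(w_i), & \text{otherwise,}
\end{cases}
\qquad
d_r = \frac{r^*}{r}, \quad r^* = (r+1)\,\frac{n_{r+1}}{n_r},
```

where $n_r$ is the number of bigrams with count $r$ and $\alpha(w_{i-1})$ redistributes the discounted probability mass over unseen continuations.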
Multi-Tagging for Transition-based Dependency Parsing
This project focuses on a modification of a greedy transition-based dependency parser. Typically, a part-of-speech (POS) tagger models a probability distribution over all the possible tags for each word in a given sentence and chooses one as its best guess. This is then passed on to the parser, which uses the information to build a parse tree. The current state of the art for POS tagging is about 97% word accuracy, which seems high but corresponds to only around 56% sentence accuracy. Small errors at the POS tagging stage can lead to large errors further down the NLP pipeline, and transition-based parsers are particularly sensitive to these kinds of mistakes. A maximum entropy Markov model was trained as a POS multi-tagger that passes more than its 1-best guess to the parser, on the theory that the parser could make a better decision when committing to a parse for the sentence. This approach has been shown to improve accuracy in other parsing frameworks. We show there is a correlation between tagging ambiguity and parser accuracy: in fact, the higher the average number of tags per word, the higher the accuracy.
awhillas
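The abstract does not spell out the selection rule, but multi-taggers in the parsing literature typically keep every tag whose probability is within a factor beta of the 1-best tag. A minimal sketch of that rule, with a stub distribution standing in for the trained MEMM:

```python
def multi_tag(tag_probs: dict[str, float], beta: float = 0.1) -> list[str]:
    """Keep every tag whose probability is at least beta times the best
    tag's probability, the usual multi-tagging selection rule.
    Lower beta means more ambiguity is passed on to the parser."""
    best = max(tag_probs.values())
    return sorted(
        (t for t, p in tag_probs.items() if p >= beta * best),
        key=lambda t: -tag_probs[t],
    )

# Stub distribution for one word; a trained MEMM would supply these.
probs = {"NN": 0.55, "VB": 0.30, "JJ": 0.10, "RB": 0.05}
print(multi_tag(probs, beta=0.2))  # -> ['NN', 'VB']
```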

Related Tags

Purdue University, Handout, International Languages, Math, References, Citations, University, Résumé / CV, French, Getting Started, Cover Letter, Poem, Spanish, Project / Lab Report, Thesis, Two-column, University of Birmingham, Humanities, Modern Language Association (MLA), Chicago, Turabian