Updates from January, 2016

  • jkabtech 12:48 am on January 16, 2016 Permalink | Reply
Tags: antibiotic, Bacillus, cereus, Lifestyle, resist

    Lifestyle switching: Bacillus cereus is able to resist certain antibiotic therapies 

    The bacterium B. cereus had so far been considered to be exclusively endospore-forming. In response to harsh conditions, the bacteria form protective endospores enabling them to remain dormant for extended periods. When conditions are more favourable, the endospores reactivate to become fully functioning bacteria.

Elrike Frenzel, Markus Kranzler and Monika Ehling-Schulz of the Institute of Microbiology at the University of Veterinary Medicine Vienna have now shown for the first time that B. cereus has an alternative lifestyle in the form of so-called small colony variants (SCVs). In B. cereus these SCVs form in response to exposure to aminoglycoside antibiotics. SCVs grow more slowly than the original form of B. cereus. They have an altered metabolism and are resistant to those antibiotics which triggered this state, namely aminoglycosides.

    “The bacterium protects itself against the harmful effects of the antibiotics by forming these Small Colony Variants. But B. cereus is usually treated with exactly those antibiotics which induce the SCV state. If an antibiotic triggers the formation of SCVs, it also triggers resistance,” first author Frenzel explains.

    Rethinking therapy and diagnostics

    The mechanism discovered by Frenzel, Kranzler and Ehling-Schulz is of enormous significance in clinical practice. Traditional diagnostic methods are based on the identification of metabolic features of B. cereus. These tests will not detect SCVs, however, as they have a slower, altered metabolism. This may result in incorrect antibiotic therapies or even failed diagnoses. Study author Frenzel sees molecular-based diagnostics as the only way to diagnose this form of B. cereus.

Treating B. cereus infections using only aminoglycoside antibiotics therefore carries the risk of a prolonged infection. SCVs grow more slowly, but they still produce toxins that are harmful to the body. “In this case, a combination therapy with other antibiotic groups is advisable,” Frenzel recommends.

    New molecular mechanism of SCV formation

One species of bacteria that has been known for years to be a multiresistant hospital pathogen and which poses a life-threatening risk for immunocompromised individuals in particular is Staphylococcus aureus. Those bacteria also form SCVs, but unlike B. cereus they are capable of reverting to their original state. For B. cereus, the adaptation to a small colony variant appears to be final. “We believe that the SCV formation in B. cereus functions differently than in S. aureus,” says study author Ehling-Schulz.

    Environmental niche to cope with stress

“The ability to form SCVs appears to be of environmental significance for the bacteria,” Frenzel believes. “This alternative lifestyle allows the bacteria to avoid threatening stress factors such as antibiotic exposure. B. cereus are soil-dwelling, and other microorganisms in the soil produce antibiotics. Here, too, the formation of SCVs would be an advantage for the bacteria.”

    Journal Reference:

Elrike Frenzel, Markus Kranzler, Timo D. Stark, Thomas Hofmann, Monika Ehling-Schulz. The Endospore-Forming Pathogen Bacillus cereus Exploits a Small Colony Variant-Based Diversification Strategy in Response to Aminoglycoside Exposure. mBio, 2015; 6(6): e01172-15. DOI: 10.1128/mBio.01172-15

    View the original article here

     
  • jkabtech 8:58 pm on January 15, 2016 Permalink | Reply
Tags: GroundWall, Transition, VertiGo, WallClimbing

    VertiGo – A Wall-Climbing Robot including Ground-Wall Transition 

    Paul Beardsley (Disney Research Zurich)
    Prof Dr Roland Siegwart (ETH Zurich)
    Michael Arigoni (ETH Zurich)
    Michael Bischoff (ETH Zurich)
    Silvan Fuhrer (ETH Zurich)
    David Krummenacher (ETH Zurich)
    Dario Mammolo (ETH Zurich)
    Robert Simpson (ETH Zurich)

    December 29, 2015


    VertiGo is a wall-climbing robot that is capable of transitioning from the ground to the wall, created in collaboration between Disney Research Zurich and ETH. The robot has two tiltable propellers that provide thrust onto the wall, and four wheels. One pair of wheels is steerable, and each propeller has two degrees of freedom for adjusting the direction of thrust. By transitioning from the ground to a wall and back again, VertiGo extends the ability of robots to travel through urban and indoor environments. The robot is able to move on a wall quickly and with agility. The use of propellers to provide thrust onto the wall ensures that the robot is able to traverse over indentations such as masonry. The choice of two propellers rather than one enables a floor-to-wall transition – thrust is applied both towards the wall using the rear propeller, and in an upward direction using the front propeller, resulting in a flip onto the wall.


    Download File “VertiGo – A Wall-Climbing Robot including Ground-Wall Transition-Paper”
    [pdf, 1.27 MB]

    The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author’s copyright. These works may not be reposted without the explicit permission of the copyright holder.

    View the original article here

     
  • jkabtech 5:53 pm on January 15, 2016 Permalink | Reply
Tags: commonly, implementation, statistical

    Most commonly used statistical tests and implementation in R 

This chapter explains the purpose of some of the most commonly used statistical tests and how to implement them in R.

The one-sample t-Test is a parametric test used to check whether the mean of a sample from a normal distribution could reasonably be a specific value.

set.seed(100)
x <- rnorm(50, mean = 10, sd = 0.5) # a normal sample with true mean 10
t.test(x, mu = 10) # test if the mean of x could be 10
#=> One Sample t-test
#=>
#=> data: x
#=> t = 0.70372, df = 49, p-value = 0.4849
#=> alternative hypothesis: true mean is not equal to 10
#=> 95 percent confidence interval:
#=> 9.924374 10.157135
#=> sample estimates:
#=> mean of x
#=> 10.04075

In the above case, the p-Value is not less than the significance level of 0.05, therefore the null hypothesis that the mean is 10 cannot be rejected. Also note that the 95% confidence interval includes the value 10 within its range. So it is ok to say the mean of x is 10, especially since x is assumed to be normally distributed. If a normal distribution is not assumed, use the Wilcoxon signed rank test shown in the next section.

    Note: Use conf.level argument to adjust the confidence level.
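For instance, to get a 99% confidence interval instead of the default 95% (a minimal one-liner using the same x as above):

t.test(x, mu = 10, conf.level = 0.99) # report a 99% confidence interval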

The Wilcoxon signed rank test is used to test the mean of a sample when a normal distribution is not assumed. It can be an alternative to the t-Test, especially when the data sample is not assumed to follow a normal distribution. It is a non-parametric method used to test if an estimate is different from its true value.

numeric_vector <- c(20, 29, 24, 19, 20, 22, 28, 23, 19, 19)
wilcox.test(numeric_vector, mu = 20, conf.int = TRUE, conf.level = 0.9) # requesting a 90% CI
#> Wilcoxon signed rank test with continuity correction
#>
#> data: numeric_vector
#> V = 30, p-value = 0.1056
#> alternative hypothesis: true location is not equal to 20
#> 90 percent confidence interval:
#> 19.00006 25.99999
#> sample estimates:
#> (pseudo)median
#> 23.00002

If p-Value < 0.05, reject the null hypothesis and accept the alternate hypothesis mentioned in your R code’s output. Type example(wilcox.test) in the R console for more illustration.

Both the t-Test and the Wilcoxon rank sum test can be used to compare the means of two samples. The difference is that the t-Test assumes the samples being tested are drawn from a normal distribution, while Wilcoxon’s rank sum test does not.

Pass the two numeric vector samples into t.test() when the samples are normally distributed, and into wilcox.test() when they aren’t assumed to follow a normal distribution.

x <- c(0.80, 0.83, 1.89, 1.04, 1.45, 1.38, 1.91, 1.64, 0.73, 1.46)
y <- c(1.15, 0.88, 0.90, 0.74, 1.21)
wilcox.test(x, y, alternative = "g") # g for greater
#=> Wilcoxon rank sum test
#=>
#=> data: x and y
#=> W = 35, p-value = 0.1272
#=> alternative hypothesis: true location shift is greater than 0

With a p-Value of 0.1272, we cannot reject the null hypothesis that both x and y have the same means.

t.test(1:10, y = c(7:20)) # P = .00001855
#=> Welch Two Sample t-test
#=>
#=> data: 1:10 and c(7:20)
#=> t = -5.4349, df = 21.982, p-value = 1.855e-05
#=> alternative hypothesis: true difference in means is not equal to 0
#=> 95 percent confidence interval:
#=> -11.052802 -4.947198
#=> sample estimates:
#=> mean of x mean of y
#=> 5.5 13.5

    With p-Value < 0.05, we can safely reject the null hypothesis that there is no difference in mean.

# Use paired = TRUE for 1-to-1 comparison of observations.
t.test(x, y, paired = TRUE) # when observations are paired, use 'paired' argument.
wilcox.test(x, y, paired = TRUE) # both x and y are assumed to have similar shapes

Conventionally, if the p-Value is less than the significance level (typically 0.05), reject the null hypothesis that both means are equal.

The Shapiro-Wilk test is used to test if a sample follows a normal distribution.

shapiro.test(numericVector) # Does numericVector follow a normal distribution?

Let’s see how to run the test on a sample from a normal distribution.

# Example: Test a normal distribution
set.seed(100)
normaly_disb <- rnorm(100, mean = 5, sd = 1) # generate a normal distribution
shapiro.test(normaly_disb) # the shapiro test
#=> Shapiro-Wilk normality test
#=>
#=> data: normaly_disb
#=> W = 0.98836, p-value = 0.535

The null hypothesis here is that the sample being tested is normally distributed. Since the p-Value is not less than the significance level of 0.05, we don’t reject the null hypothesis. Therefore, the tested sample is confirmed to follow a normal distribution (though we already knew that!).

# Example: Test a uniform distribution
set.seed(100)
not_normaly_disb <- runif(100) # uniform distribution
shapiro.test(not_normaly_disb)
#=> Shapiro-Wilk normality test
#=> data: not_normaly_disb
#=> W = 0.96509, p-value = 0.009436

    If p-Value is less than the significance level of 0.05, the null-hypothesis that it is normally distributed can be rejected, which is the case here.

The Kolmogorov-Smirnov test is used to check whether two samples follow the same distribution.

ks.test(x, y) # x and y are two numeric vectors

# From different distributions
x <- rnorm(50)
y <- runif(50)
ks.test(x, y)
#=> Two-sample Kolmogorov-Smirnov test
#=>
#=> data: x and y
#=> D = 0.58, p-value = 4.048e-08
#=> alternative hypothesis: two-sided

# Both from normal distribution
x <- rnorm(50)
y <- rnorm(50)
ks.test(x, y)
#=> Two-sample Kolmogorov-Smirnov test
#=>
#=> data: x and y
#=> D = 0.18, p-value = 0.3959
#=> alternative hypothesis: two-sided

If p-Value < 0.05 (significance level), we reject the null hypothesis that they are drawn from the same distribution. In other words, p < 0.05 implies that x and y are from different distributions.

Fisher’s F test can be used to check if two samples have the same variance.

    var.test(x, y) # Do x and y have the same variance?

Alternatively, fligner.test() and bartlett.test() can be used for the same purpose.
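For example, using R’s built-in InsectSprays data (a quick sketch; any grouped data works):

# Null hypothesis: variances are equal across groups
bartlett.test(count ~ spray, data = InsectSprays) # assumes normality
fligner.test(count ~ spray, data = InsectSprays)  # robust to departures from normality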

    Chi-squared test in R can be used to test if two categorical variables are dependent, by means of a contingency table.

Example use case: you may want to figure out whether big-budget films become box-office hits. We have two categorical variables (budget of film, success status), each with two factors (big/low budget and hit/flop), which forms a 2 x 2 matrix.

chisq.test(table(categorical_X, categorical_Y), correct = FALSE) # Yates continuity correction not applied
# or
summary(table(categorical_X, categorical_Y)) # performs a chi-squared test

# Sample results
#=> Pearson's Chi-squared test
#=> data: M
#=> X-squared = 30.0701, df = 2, p-value = 2.954e-07

    There are two ways to tell if they are independent:

By looking at the p-Value: if the p-Value is less than 0.05, we reject the null hypothesis that x and y are independent. So for the example output above (p-Value = 2.954e-07), we reject the null hypothesis and conclude that x and y are not independent.

From the Chi-squared value: for a 2 x 2 contingency table, which has 1 degree of freedom (d.o.f), if the calculated Chi-squared is greater than 3.841 (the critical value), we reject the null hypothesis that the variables are independent. To find the critical value of larger contingency tables, use qchisq(0.95, df), where df = (r-1) * (c-1), r and c being the numbers of rows and columns.
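As a quick check of that critical value:

qchisq(0.95, df = 1) # => 3.841459, the critical value for a 2 x 2 table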

Correlation is used to test the linear relationship between two continuous variables.

The cor.test() function computes the correlation between two continuous variables and tests if y is dependent on x. The null hypothesis is that the true correlation between x and y is zero.

cor.test(x, y) # where x and y are numeric vectors

cor.test(cars$speed, cars$dist)
#=> Pearson's product-moment correlation
#=>
#=> data: cars$speed and cars$dist
#=> t = 9.464, df = 48, p-value = 1.49e-12
#=> alternative hypothesis: true correlation is not equal to 0
#=> 95 percent confidence interval:
#=> 0.6816422 0.8862036
#=> sample estimates:
#=> cor
#=> 0.8068949

If the p-Value is less than 0.05, we reject the null hypothesis that the true correlation is zero (i.e. they are independent). So in this case, we reject the null hypothesis and conclude that dist is dependent on speed.

fisher.test(contingencyMatrix, alternative = "greater") # Fisher's exact test for independence of rows and columns in a contingency table
friedman.test() # Friedman's rank sum non-parametric test

    There are more useful tests available in various other packages.

The lawstat package has a good collection. The outliers package has a number of tests for detecting the presence of outliers.

    View the original article here

     
  • jkabtech 1:28 pm on January 15, 2016 Permalink | Reply

    People can read their manager’s mind 

    December 31st, 2015 | wetware

    The fish rots from the head down.

    – A beaten saying

    People generally don’t do what they’re told, but what they expect to be rewarded for. Managers often say they’ll reward something – perhaps they even believe it. But then they proceed to reward different things.

    I think people are fairly good at predicting this discrepancy. The more productive they are, the better they tend to be at predicting it. Consequently, management’s stated goals will tend to go unfulfilled whenever deep down, management doesn’t value the sort of work that goes into achieving these goals.

So not only is paying lip service to these goals worthless, but so is lying to oneself and genuinely convincing oneself. When the time comes to reward people, it is the gut feeling of whose work is truly remarkable that matters. And what you usually convince yourself of is that the goal is important – but not that achieving it is remarkable. In fact, often someone pursuing what you think are unimportant goals in a way that you admire will impress you more than someone doing “important grunt work” (in your eyes.)

    You then live happily with this compartmentalization – an important goal to be achieved by unremarkable people. However, nobody is fooled except you. The people whose compensation depends on your opinion have ample time to remember and analyze your past words and decisions – more time than you, in fact, and a stronger incentive. And so their mental model of you is often much better than your own. So they ignore your requests and become valued, instead of following them and sinking into obscurity.

    Examples:

A manager truly appreciates original mathematical ideas. The manager requests to rid the code of crash-causing bugs, because customers resent crashes. The most confident people ignore him and spend time coming up with original math. The less confident people spend time chasing bugs, are upset by the lack of recognition, and eventually leave for greener pastures. At any given moment, the code base is riddled with crash-causing bugs.

A manager enjoys “software architecture”, design patterns, and language-lawyer type of knowledge. The manager requests to cooperate better with neighboring teams who are upset by missing functionality in the beautifully architected software. People will tend to keep designing more patterns into the program.

A highly influential figure enjoys hacking on their machine. The influential figure points out the importance of solid, highly-available infrastructure to support development. The department responsible for said infrastructure will guarantee that he gets as much bandwidth, RAM, screen pixels and other goodies as they can supply, knowing that the infrastructure he really cares about is that which enables the happy hacking on his machine. The rest of the org might well remain stuck with a turd of an infrastructure.

A manager loathes spending money. The manager requires building highly-available infrastructure to support development. People responsible for infrastructure will build a piece of shit out of yesteryear’s scraps purchased at nearby failing companies for peanuts, knowing that they’ll be rewarded.

A manager is all about timely delivery, and he did very little code maintenance in his life. The manager nominally realizes that a lot of code is used in multiple shipping products; that it takes some time to make a change compatible with all the client code; and that branching the entire code base is a quick way to do the work for this delivery, but you’ll pay for the shortcut many times over in each of your future deliveries. People will fork the code base for every shipping product. (I’ve seen it and heard about it more times than the luckier readers would believe.)

    And so it goes. If something is rotten in an org, the root cause is a manager who doesn’t value the work needed to fix it. They might value it being fixed, but of course no sane employee gives a shit about that. A sane employee cares whether they are valued. Three corollaries follow:

    Corollary 1. Who can, and sometimes does, un-rot the fish from the bottom? An insane employee. Someone who finds the forks, crashes, etc. a personal offense, and will repeatedly risk annoying management by fighting to stop these things. Especially someone who spends their own political capital, hard earned doing things management truly values, on doing work they don’t truly value – such a person can keep fighting for a long time. Some people manage to make a career out of it by persisting until management truly changes their mind and rewards them. Whatever the odds of that, the average person cannot comprehend the motivation of someone attempting such a feat.

    Corollary 2. When does the fish un-rot from the top? When a manager is taught by experience that (1) neglecting this thing is harmful and (2) it’s actually hard to get it right (that is, the manager himself, or someone he considers smart, tried and failed.) But that takes managers admitting mistakes and learning from them. Such managers exist; to be called one of them would exceed my dreams.

Corollary 3. Managers who can’t make themselves value all important work should at least realize this: their goals do not automatically become their employees’ goals. On the contrary, much or most of a manager’s job is to align these goals – and if it were that easy, perhaps they wouldn’t pay managers that much, now would they? I find it a blessing to be able to tell a manager, “you don’t really value this work so it won’t get done.” In fact, it’s a blessing even if they ignore me. That they can hear this sort of thing without exploding means they can be reasoned with. To be considered such a manager is the apex of my ambitions.

Finally, don’t expect people to enlighten you and tell you what your blind spots are. Becoming a manager means losing the privilege of being told what’s what. It’s a trap to think of oneself as just the same reasonable guy – “why wouldn’t they want to talk to me?” The right question is, why would they? Is the risk worth it for them? Only if they take your org’s problem very personally, which most people quite sensibly don’t. Someone telling me what’s what is a thing to be thankful for, but not to count on.

    The safe assumption is, they read your mind like an open book, and perhaps they read it out loud to each other – but not to you. The only way to deal with the problems I cause is an honest journey into the depths of my own rotten mind.

    P.S. As it often happens, I wanted to write this for years (the working title was “people know their true KPIs”), but I didn’t. I was prompted to finally write it by reading Dan Luu’s excellent “How Completely Messed Up Practices Become Normal”, where he says, among other things, “It’s sort of funny that this ends up being a problem about incentives. As an industry, we spend a lot of time thinking about how to incentivize consumers into doing what we want. But then we set up incentive systems that are generally agreed upon as incentivizing us to do the wrong things…” I guess this is my take on the incentives issue – real incentives vs stated incentives; I believe people often break rules put in place to achieve a stated goal in order to do the kind of work that is truly valued (even regardless of whether that work’s goal is valued.) It’s funny how I effectively comment on Dan’s blog two times in a row, his blog having become easily my favorite “tech blog”, while my own is kinda fading away as I spend my free time learning to animate.

    View the original article here

     
  • jkabtech 8:50 am on January 15, 2016 Permalink | Reply
Tags: Guesstimate, Spreadsheet

    Introducing Guesstimate, a Spreadsheet for Things That Aren’t Certain 

    Existing spreadsheet software is made for analyzing tables of data. Excel, Google Sheets, and similar tools are fantastic for doing statistics on things that are well known.

Unfortunately, many important things are not known. I don’t yet know if I will succeed as an entrepreneur, when I will die, or exactly how bad sugar is for me. No one really knows what the US GDP will be if Donald Trump gets elected, or if the US can ‘win’ if we step up our fight in Syria. But we can make estimates, and we can use tools to become as accurate as possible.

    Estimates for these things should feature ranges, not exact numbers. There should be lower and upper bounds.

    The first reaction of many people to uncertain math is to use the same techniques as for certain math. They would either imagine each unknown as an exact mean, or take ‘worst case’ and ‘best case’ scenarios and multiply each one. These two approaches are quite incorrect and produce oversimplified outputs.

    This is why I’ve made Guesstimate, a spreadsheet that’s as easy to use as existing spreadsheets, but works for uncertain values. For any cell you can enter confidence intervals (lower and upper bounds) that can represent full probability distributions. 5000 Monte Carlo simulations are performed to find the output interval for each equation, all in the browser.
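To illustrate the idea (a rough sketch only, treating each 90% confidence interval as a normal distribution, which is one of several shapes a cell could take; this is not Guesstimate’s actual code), here is what propagating intervals by Monte Carlo can look like in R:

# Sketch: propagate 90% confidence intervals through an equation by simulation
n <- 5000
ci_to_samples <- function(low, high) {
  mu <- (low + high) / 2
  sigma <- (high - low) / (2 * qnorm(0.95)) # a 90% CI spans about +/- 1.645 sd
  rnorm(n, mu, sigma)
}
revenue <- ci_to_samples(100, 200) # e.g. revenue somewhere between 100 and 200
costs <- ci_to_samples(50, 120)
profit <- revenue - costs
quantile(profit, c(0.05, 0.5, 0.95)) # 90% output interval plus the median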

    At the end of this you don’t just understand the ‘best’ and ‘worst’ scenarios, but you also get everything in between and outside. There’s the mean, the median, and several of the percentiles. In the future I intend to add sensitivity analyses and the value of information calculations.

    Guesstimate is free and open source. I encourage you to try it out. Make estimates of things you find important or interesting. I’m looking forward to seeing what people come up with.

    View the original article here

     
  • jkabtech 4:18 am on January 15, 2016 Permalink | Reply
Tags: Adonisjs, Laravel, Nodejs

    Adonis.js v2 released – Laravel for Node.js 

Adonis is a true MVC framework for Node.js with the basics done right. It borrows the concept of service providers from the popular PHP framework Laravel to write scalable applications, and it also leverages the power of ES6 to make your code expressive and maintainable.

Route.get('/', 'HomeController.index')

// Define controller
class HomeController {
  * index (request, response) {
    response.send('Whufff, i am using adonis')
  }
}

    View the original article here

     
  • jkabtech 1:17 am on January 15, 2016 Permalink | Reply
Tags: library, Racket, untyped

    Making a dual typed / untyped Racket library 

    Chuck Close is a terrific painter. Primarily a portraitist, he’s probably best known for his technique of dividing photos into grids and painting them, one cell at a time. As you can see, even though the overall image is easy to discern, the brushwork and colors in each cell follow a separate logic.

    What I like most about Close’s grid paintings is that he never resolves their inherent tensions—between macro and micro, order and disorder, surface and depth, depiction and abstraction. Instead, he keeps these massive works—they’re 8–10 feet tall—carefully balanced in between. When I stand in front of one of these paintings, I feel like I’m in two places at once. Nice trick.

    It’s also the kind of trick that Racket is good at. Recently I was wondering how to upgrade my sugar utility library for Racket so that it would work with Typed Racket, but without changing how it works in regular Racket.

    Typed Racket is a dialect of Racket that adds static typing to the otherwise untyped Racket language. You can explicitly add types to variables and functions using type annotations. But Typed Racket also has a crafty type-inference system that deduces most of the others. Cleverly, Typed Racket doesn’t need a separate compiler. Once it completes its typechecking, you’re left with untyped Racket code, which is then compiled as usual.

    Typed languages can be faster, because the compiler can eliminate certain checks that would otherwise have to be done when the program runs. This is the main reason I originally investigated Typed Racket. For instance, if you say foo = foo + bar in Python, your intention is to update the value of foo to be the sum of bar and the current value of foo. But even if you don’t care about types, Python still does: before it tries to add these two, it has to check that they both hold numeric values (if they don’t, you’ll get a TypeError). Whereas in a typed program, one could declare them both to be Numbers. The program can rely on that promise (and skip checking them for number-ness later).
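As a small illustration of the annotations involved (my example, not from the original post):

#lang typed/racket
;; With this annotation, TR knows acc and i are integers,
;; so (+ acc i) needs no runtime number check inside the loop.
(: sum-to : Integer -> Integer)
(define (sum-to n)
  (for/fold ([acc : Integer 0]) ([i (in-range n)])
    (+ acc i)))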

    Note that this is another way of saying that every programming language is a statically typed language. The question is who does the typing (either you, or the program interpreter) and when (either before the program is running, or during). What we call an “untyped” language is better thought of as an expensively typed language. For many tasks, the convenience of not specifying types outweighs this cost. This is why untyped languages are popular for many jobs. But in some cases, you need the extra performance.

    The other benefit of a typed language is safety. Type checking is a way of making and verifying claims about the data and functions in your program in a disciplined manner. In that sense, Typed Racket has a lot in common with Racket’s contract system. But because Typed Racket is verifying the types before the program runs, it can catch subtler errors. In practice, it takes more effort to get a program running in Typed Racket, but once it does, you can be confident that its internal logic is sound. Your effort is repaid later when you spend less time chasing down bugs.

The unavoidable wrinkle in a mixed typed / untyped system is the interaction between typed and untyped code. Most Racket libraries are written with untyped code, and Typed Racket—now shortening this to TR—has to use these libraries. TR’s job is to ensure that your functions and data are what they say they are. So you can’t just toss untyped code into the mix—“don’t worry TR, this will work.” TR likes you, but it doesn’t trust you.

    Instead, TR offers a function called require/typed. Like the standard require function that imports a library into a program, require/typed lets you specify the types that TR should apply to the untyped code.
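Schematically, it looks like this (a sketch with a hypothetical untyped module “mylib.rkt”):

#lang typed/racket
;; "mylib.rkt" is a hypothetical untyped module that provides gt
(require/typed "mylib.rkt"
               [gt (Integer Integer -> Boolean)])
(gt 42 41) ; TR generates a contract from the declared type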

    This works well enough, but it has a cost: in this case, TR has to perform its typechecking when the program runs, and it does so by converting these types into Racket contracts. The added cost of a contract isn’t a big deal if you use the imported function occasionally.

    But if you use the function a lot, the contract can be expensive. My sugar library is a collection of little utility and helper functions that get called frequently during a program. When I use them with require/typed, in many cases the contract that gets wrapped around the function takes longer than the function itself.

    What’s the solution? One option would be to convert sugar to be a native TR library. That’s fine, but this conversion can impose limitations on the library that aren’t necessary or desirable for untyped use. For instance, sometimes you need to narrow the interface of an untyped function to make it typable.

    Another option would be to create a new version of sugar that’s typed, and make it available alongside untyped sugar. But this means maintaining two sets of code in parallel. Bad for all the usual reasons.

    Instead, I wanted to make sugar available as both a natively typed and untyped library, while only maintaining one codebase.

    Typed code naturally has more information in it than untyped code (namely, the type annotations). So my intuition was to convert sugar to TR and then generate an untyped version from this code by ignoring the type annotations.

    TR makes this easy by offering its no-check dialects. You can write code under #lang typed/racket and then, if you want the typing to be ignored, change that line to #lang typed/racket/no-check, and the program will behave like untyped code.

#lang typed/racket
(: gt : Integer Integer -> Boolean)
(define (gt x y) (> x y))
(gt 5.0 4.0) ; typecheck error: Floats are not Integers

#lang typed/racket/no-check
(: gt : Integer Integer -> Boolean)
(define (gt x y) (> x y))
(gt 5.0 4.0) ; works because the Integer type is ignored

    This is cool, right? Untyped languages are usually built on top of typed languages, not the other way around. (For instance, the reference implementation of Python, an untyped language, is written in C, a typed language.) By making types an option rather than a requirement, Typed Racket creates new possibilities for how you can use types (what is sometimes called gradual typing.)

    Compiling a chunk of source code at two locations isn’t hard. Racket already has an include function that lets you pull in source code from another file. Our first intuition might be to set up three files, like so:

(provide gt)
(: gt : Integer Integer -> Boolean)
(define (gt x y) (> x y))

#lang typed/racket
(include "gt.rkt")

#lang typed/racket/no-check
(include "gt.rkt")

    This works, but it’s not very ergonomic: “gt.rkt” has no #lang line, so we can’t run it directly in DrRacket, which makes editing and testing the file more difficult.

    To get around this, I wrote a new function called include-without-lang-line that behaves the same way as include, but strips out the #lang line it finds in the included file. That allows us to consolidate the files:

#lang typed/racket
(provide gt)
(: gt : Integer Integer -> Boolean)
(define (gt x y) (> x y))

#lang typed/racket/no-check
(require sugar/include)
(include-without-lang-line "typed-gt.rkt")

    Suppose we also want the option to add untyped code to the untyped parts of the library. So rather than using #lang typed/racket/no-check directly, we can move this code into a submodule.

#lang typed/racket
(provide gt)
(: gt : Integer Integer -> Boolean)
(define (gt x y) (> x y))

#lang racket
(provide gt)
(module typed-code typed/racket/no-check
  (require sugar/include)
  (include-without-lang-line "typed-gt.rkt"))
(require 'typed-code)

    This way, we can (require “typed-gt.rkt”) from TR code and get the typed version of the gt function, or (require “untyped-gt.rkt”) from untyped Racket code and get the less strict untyped version. But the body of the function only exists in one place.

    Testing in Racket is usually handled with the rackunit library. For most test cases in sugar, the behavior of the typed and untyped code should be identical. Thus, I didn’t want to maintain two largely identical test files. I wanted to write a list of tests and run them in both typed and untyped mode.

    Moreover, unlike the library itself, which is set up for the convenience of others, the tests could be set up for the convenience of me. So my goal was to make everything happen within one file. That meant my include-without-lang-line gimmick wouldn’t be useful here.

This time, submodules were the solution. Suppose we have a simple rackunit check, say (check-true (gt 42 41)).

    It’s clear how we can run this test in typed and untyped contexts using two test files:

#lang racket
(require rackunit "untyped-gt.rkt")
(check-true (gt 42 41))

#lang typed/racket
(require typed/rackunit "typed-gt.rkt")
(check-true (gt 42 41))

    Then we can combine them into a single file with submodules:

#lang racket
(module untyped-test racket
  (require rackunit "untyped-gt.rkt")
  (check-true (gt 42 41)))
(require 'untyped-test)

(module typed-test typed/racket
  (require typed/rackunit "typed-gt.rkt")
  (check-true (gt 42 41)))
(require 'typed-test)

    The final maneuver is to make a macro that will take our list of tests and put them into this two-submodule form. Here’s a simple way to do it:

#lang racket
(require (for-syntax racket/syntax))

(define-syntax (eval-as-typed-and-untyped stx)
  (syntax-case stx ()
    [(_ exprs ...)
     (with-syntax ([untyped-sym (generate-temporary)]
                   [typed-sym (generate-temporary)])
       #'(begin
           (module untyped-sym racket
             (require rackunit "untyped-gt.rkt")
             exprs ...)
           (require 'untyped-sym)
           (module typed-sym typed/racket
             (require typed/rackunit "typed-gt.rkt")
             exprs ...)
           (require 'typed-sym)))]))

(eval-as-typed-and-untyped
 (check-true (gt 42 41))) ; works

We need generate-temporary in case we want to invoke the macro multiple times within the file—it ensures that each submodule has a distinct, nonconflicting name.

    Aside from the potentially slower performance, one significant shortcoming of require/typed is that it can’t be used with macros. But that’s not a problem here. If we make a macro version of gt, everything still works:

#lang typed/racket
(provide gt gt-macro)
(: gt : Integer Integer -> Boolean)
(define (gt x y) (> x y))
(define-syntax-rule (gt-macro x y) (> x y))

#lang racket
(provide gt gt-macro)
(module typed-code typed/racket/no-check
  (require sugar/include)
  (include-without-lang-line "typed-gt.rkt"))
(require 'typed-code)

#lang racket
(define-syntax (eval-as-typed-and-untyped stx) ...) ; definition same as above

(eval-as-typed-and-untyped
 (check-true (gt 42 41)) ; still works
 (check-true (gt-macro 42 41))) ; also works

    Using this technique, nothing stops us from adding contracts to the untyped library that correspond to the typed version of the function:

#lang typed/racket
(provide gt)
(: gt : Integer Integer -> Boolean)
(define (gt x y) (> x y))

#lang racket
(provide (contract-out [gt (integer? integer? . -> . boolean?)]))
(module typed-code typed/racket/no-check
  (require sugar/include)
  (include-without-lang-line "typed-gt.rkt"))
(require 'typed-code)

#lang racket
(require rackunit "untyped-gt.rkt")
(check-true (gt 42 41)) ; works
(check-exn exn:fail:contract? (λ _ (gt 42.5 41))) ; raises a contract violation, which check-exn catches

    From here, it’s a short step to a triple-mode library: we can make a ‘safe submodule in “untyped-gt.rkt” that provides the function with a contract, and otherwise provide it without. Details are left as an exercise to the reader. (Hints are available in the sugar source.)
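For reference, one plausible shape for that ‘safe submodule (my sketch; sugar’s actual code may differ):

#lang racket
;; sketch of "untyped-gt.rkt" with an added 'safe submodule
(provide gt) ; plain gt, no contract
(module typed-code typed/racket/no-check
  (require sugar/include)
  (include-without-lang-line "typed-gt.rkt"))
(require 'typed-code)
(module* safe #f
  ;; (require (submod "untyped-gt.rkt" safe)) gets the contracted version
  (provide (contract-out [gt (integer? integer? . -> . boolean?)])))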

    — Matthew Butterick | 6 May 2015


    View the original article here

     
  • jkabtech 8:44 pm on January 14, 2016 Permalink | Reply
Tags: Fai0verflow

    Fai0verflow: Linux on the PS4 [video] 

Linux on the PlayStation 4, shown at the 32nd Chaos Communication Congress

Sorry, I could not read the content from this page.

    View the original article here

     
  • jkabtech 5:09 pm on January 14, 2016 Permalink | Reply
    Tags: Disque   

    Disque 1.0 RC1 is out 

Sorry, I could not read the content from this page.

    View the original article here

     
  • jkabtech 8:55 am on January 14, 2016 Permalink | Reply
    Tags: periodic, seventh, table   

    The seventh row of the periodic table is now full 

30 Dec 2015
Category: Press Releases
The fourth IUPAC/IUPAP Joint Working Party (JWP) on the priority of claims to the discovery of new elements has reviewed the relevant literature for elements 113, 115, 117, and 118 and has determined that the claims for discovery of these elements have been fulfilled, in accordance with the 1991 discovery criteria of the IUPAP/IUPAC Transfermium Working Group (TWG). These elements complete the 7th row of the periodic table of the elements, and the discoverers from Japan, Russia and the USA will now be invited to suggest permanent names and symbols. The new elements and assigned priorities of discovery are as follows:

    Element 113 (temporary working name and symbol: ununtrium, Uut)
    The RIKEN collaboration team in Japan have fulfilled the criteria for element Z=113 and will be invited to propose a permanent name and symbol.

    Elements 115, 117, and 118 (temporary working names and symbols: ununpentium, Uup; ununseptium, Uus; and ununoctium, Uuo)
The collaboration between the Joint Institute for Nuclear Research in Dubna, Russia; Lawrence Livermore National Laboratory, California, USA; and Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA have fulfilled the criteria for elements Z=115 and Z=117 and will be invited to propose permanent names and symbols.

    The collaboration between the Joint Institute for Nuclear Research in Dubna, Russia and Lawrence Livermore National Laboratory, California, USA have fulfilled the criteria for element Z=118 and will be invited to propose a permanent name and symbol.

The priorities for four new chemical elements are being introduced simultaneously, after the careful verification of the discoveries and priorities. The decisions are detailed in two reports by the Joint Working Party (JWP), which includes experts drawn from IUPAC and IUPAP (the International Union of Pure and Applied Physics). These reports will be published in an early 2016 issue of the IUPAC journal Pure and Applied Chemistry (PAC). The JWP has reviewed the relevant literature pertaining to several claims of these new elements. The JWP has determined that the RIKEN collaboration has fulfilled the criteria for the discovery of the element with atomic number Z=113. Several studies published from 2004 to 2012 have been construed as sufficient to ratify the discovery and priority.

    In the same PAC report, the JWP also concluded that the collaborative work between scientists from the Joint Institute for Nuclear Research in Dubna, Russia; from Lawrence Livermore National Laboratory, California, USA; and from Oak Ridge National Laboratory, Oak Ridge, Tennessee, USA (the Dubna-Livermore-Oak Ridge collaborations), starting in 2010, and subsequently confirmed in 2012 and 2013, have met the criteria for discovery of the elements with atomic numbers Z=115 and Z=117.

Finally, in a separate PAC article, the Dubna–Livermore collaboration started in 2006 is reported as having satisfied the criteria for discovery of element Z=118.

“A particular difficulty in establishing these new elements is that they decay into hitherto unknown isotopes of slightly lighter elements that also need to be unequivocally identified,” commented JWP chair Professor Paul J. Karol, “but in the future we hope to improve methods that can directly measure the atomic number, Z.”

“The chemistry community is eager to see its most cherished table finally being completed down to the seventh row. IUPAC has now initiated the process of formalizing names and symbols for these elements temporarily named as ununtrium (Uut, element 113), ununpentium (Uup, element 115), ununseptium (Uus, element 117), and ununoctium (Uuo, element 118),” said Professor Jan Reedijk, President of the Inorganic Chemistry Division of IUPAC.

    The proposed names and symbols will be checked by the Inorganic Chemistry Division of IUPAC for consistency, translatability into other languages, possible prior historic use for other cases, etc. New elements can be named after a mythological concept, a mineral, a place or country, a property or a scientist (see: W.H. Koppenol, PAC 74 (2002) 787-791). After Divisional acceptance, the names and two-letter symbols will be presented for public review for five months, before the highest body of IUPAC, the Council, will make a final decision on the names of these new chemical elements and their two-letter symbols and their introduction into the Periodic Table of the Elements.

    “As the global organization that provides objective scientific expertise and develops the essential tools for the application and communication of chemical knowledge for the benefit of humankind, the International Union of Pure and Applied Chemistry is pleased and honored to make this announcement concerning elements 113, 115, 117, and 118 and the completion of the seventh row of the periodic table of the elements,” said IUPAC President Dr. Mark C. Cesa, adding that, “we are excited about these new elements, and we thank the dedicated scientists who discovered them for their painstaking work, as well the members of the IUPAC/IUPAP Joint Working Party for completing their essential and critically important task.”

    For further information, contact Dr. Lynn M. Soby, Executive Director, IUPAC, at secretariat@iupac.org or lsoby@iupac.org.

    > Download pdf version

    View the original article here

     