return statement approaches - pros and cons - JavaScript

Which return style is, in your opinion, "better" and more universal in JavaScript? For example, for this operation:
We have:
1st approach
const result = foo.filter(...).map(...).reduce(...);
return result;
2nd approach
return foo.filter(...).map(...).reduce(...);
With the 1st approach, I think it's more readable when we have long code whose result is stored in a variable. But with the 2nd option we don't have an extra variable, so we save a bit of memory. Do you know any other pros and cons?
Which style should I use?
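One more consideration beyond memory: a named intermediate makes the value easy to inspect in a debugger or log before returning. A minimal sketch (the function name and callbacks here are made up for illustration):

```javascript
function sumOfDoubledEvens(foo) {
  // Named intermediate: easy to log or breakpoint before returning
  const result = foo
    .filter(n => n % 2 === 0)
    .map(n => n * 2)
    .reduce((acc, n) => acc + n, 0);
  return result;
}

console.log(sumOfDoubledEvens([1, 2, 3, 4])); // evens 2,4 -> doubled 4,8 -> 12
```

Either way, the temporary binding is short-lived, so the memory difference between the two styles is negligible in practice.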


Transforming large immutable messages

Coming from an OOP background, I've got some issues with the concept of immutable objects/records/messages in functional programming.
Let's say I pass a PurchaseOrder record through a pipeline of functions, where each function is supposed to add or update data in this record.
When dealing with mutable state, I would simply set some specific properties of the message being passed around.
When dealing with immutable records, are there design tricks that make things easier on this matter?
Copying every single field in order to change just one field is just a pain.
{ A = x.A ; B = x.B ; C = x.C ; D = x.D ; E = somethingnew; }
I guess grouping data as much as possible is a good way to deal with it, thus avoiding copying all fields.
Are there any other ways or design guidelines for this?
You can just do
let myRecord3 = { myRecord2 with Y = 100; Z = 2 }
(example from the MSDN records page)
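For JavaScript readers: the F# copy-and-update expression above has a close analogue in object spread syntax (the PurchaseOrder fields here are hypothetical):

```javascript
// Hypothetical PurchaseOrder record
const order = { id: 1, customer: "ACME", total: 100, status: "new" };

// Copy-and-update: copy all fields, override only the ones that change
const approved = { ...order, status: "approved" };

console.log(order.status);    // "new" (the original is untouched)
console.log(approved.status); // "approved"
```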
I'm from an extremely pure and extremist OOP background, and my OOP designs tend to be 99% immutable objects (even in languages which allow mutation).
If you have a pipeline of functions where each function is supposed to add or update data in a record, in my experience each function will deal with a subproblem and subconcept of that record, so you should create a class/type/whatever for each of those to follow OOP best practices like SRP and SoC. If any class/record/type has more than 4 or 5 fields/variables/attributes, I think you are probably putting too much responsibility there.
If you split the problem into several subproblems, each function of your pipeline will create a sub-record of the record, and the main function will just combine them all to create the main record. In my experience, following traditional OOP drives you to a design that allows you to achieve what you want without any mutation.

What are the pros and cons of replacing a list of conditionals (if/switch/ternary operators) with a predicate list?

Recently I've come across a school of thought that advocates replacing conditionals (if/switch/ternary operators, same below) with something else (polymorphism, strategy, visitor, etc.).
As I'd like to learn by trying new approaches, I've revised some of my JavaScript code and immediately found a relevant case, which is basically the following list of ifs (essentially the same as a switch/nested ternary operators):
function checkResult(input) {
    if (isCond1Met(input)) return result1;
    if (isCond2Met(input)) return result2;
    if (isCond3Met(input)) return result3;
    // More conditionals here...
    if (isCondNMet(input)) return resultN;
    return defaultResult;
}
Shortly after, I came up with trying a predicate list instead.
Assuming that checkResult always returns a String (which applies to my specific case), the above list of ifs can be replaced with a list of predicates (it uses arrow functions and find, which are ES6+ features, though):
var resultConds = {
    result1: isCond1Met,
    result2: isCond2Met,
    result3: isCond3Met,
    // More mappings here...
    resultN: isCondNMet
};
var results = Object.keys(resultConds);
function checkResult(input) {
    return results.find(result => resultConds[result](input)) || defaultResult;
}
(On a side note: whether checkResult should take resultConds and defaultResult as arguments is a relatively minor issue here, same below)
If the above assumption doesn't hold, the list of predicates can be changed into this instead:
var conds = [
    // More predicates here...
];
var results = [
    // More results here...
];
function checkResult(input) {
    return results[conds.findIndex(cond => cond(input))] || defaultResult;
}
A bigger refactoring may be this:
var condResults = {
    cond1: result1,
    cond2: result2,
    cond3: result3,
    // More mappings here...
    condN: resultN
};
var conds = Object.keys(condResults);
function checkResult(input) {
    // isCondMet maps each cond key to its predicate function
    return condResults[conds.find(cond => isCondMet[cond](input))] || defaultResult;
}
I'd like to ask what the pros and cons are (preferably with relevant experience and explanations) of replacing a list of conditionals with a predicate list, at least in cases like these (e.g. an input validation check returning a non-boolean result based on a list of conditionals).
For instance, which approach generally does better on:
Testability (like unit testing)
Scalability (when there are more and more conditionals/predicates)
Readability (for those familiar with both approaches, to ensure a sufficiently fair comparison)
Usability/Reusability (avoiding/reducing code duplication)
Flexibility (for example, when the internal lower-level logic for validating inputs changes drastically while preserving the same external higher-level behavior)
Memory footprint/time performance (specific to JavaScript)
Also, if you think the predicate list approach can be improved, please feel free to demonstrate the pros and cons of that improved approach.
Edit: As @Bergi mentioned, JavaScript objects are unordered, so the ES6+ Maps may be a better choice :)
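Since a Map iterates its entries in insertion order, the predicate list from the question could be sketched like this (the predicates and result strings are made-up placeholders):

```javascript
// Hypothetical predicates; the Map's insertion order is the evaluation order
const isEmpty   = s => s.length === 0;
const isTooLong = s => s.length > 8;
const isNumeric = s => /^[0-9]+$/.test(s);

const resultConds = new Map([
  ["empty",    isEmpty],
  ["too long", isTooLong],
  ["numeric",  isNumeric],
]);

function checkResult(input) {
  for (const [result, cond] of resultConds) {
    if (cond(input)) return result;
  }
  return "ok"; // defaultResult
}

console.log(checkResult(""));          // "empty"
console.log(checkResult("123456789")); // "too long"
console.log(checkResult("abc"));       // "ok"
```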
In general, putting business logic (like these predicates) in separate objects always improves testability, scalability, readability, maintainability and reusability. This comes from the modularisation inherent to this OOP design, and allows you to keep these predicates in a central store and apply whatever business procedures you want on them, staying independent from your codebase. Essentially, you're treating those conditions as data. You can even choose them dynamically, transform them to your liking, and work with them on an abstract layer.
Readability might suffer when you need to go to certain lengths to replace a simple, short condition with the generic approach, but it pays off well if you have many predicates.
Flexibility for adding/changing/removing predicates improves a lot; however, the flexibility to choose a different architecture (how to apply which kind of predicates) will worsen - you cannot just change the code in one small location, you need to touch every location that uses the predicates.
Memory and performance footprints will be bigger as well, but not enough to matter.
Regarding scalability, it only works when you choose a good architecture. A list of rules that needs to be applied in a linear fashion might stop being adequate at a certain size.
I am very much a noob at this, and regarding readability and usability I prefer the latter, but it's a personal preference and I'm usually backwards, so you shouldn't listen to me.
The conditional approach is more portable, I mean more easily ported to non-object-oriented languages. But that may never happen, and even if it does, the people doing those kinds of things probably have the tools and experience, so it shouldn't be an issue. Regarding performance, I doubt there will be any significant difference; the first one looks like branching and the second kind of like random access.
But as I said, you shouldn't listen to me, as I'm just guessing.

Conditional data structure (object) access

I have a set of JavaScript functions that handle certain objects. All these objects have the following flexibility:
Fields can be accessed like this: data[prop][sub-prop][etc.], OR
Like this (with a type sub-structure): data[TYPE][prop][sub-prop][etc.].
The object is accessed in many places, and the condition (let's call it is_mixed) is relevant everywhere.
I thought of the following alternatives:
Always access data like this: (is_mixed ? data[TYPE] : data)[prop][sub-prop][etc.]
Have a function called getData and always access data like this: getData()[prop][sub-prop][etc.].
The function code would be:
function getData() { return is_mixed ? data[TYPE] : data; }
Run the following on every new input: if (is_mixed) { data = data[TYPE]; }
It seems to me that options 2 and 3 might be copying the object data (which might be big), and performance is important here (I didn't find literature to support this guess), but option 1 will make the code big and ugly.
Is there a better option? What's the best way to achieve this in terms of performance, code quality and best practices in general?
It seems to me that options 2 and 3 might be copying the object data
No, they won't. They both just copy an object reference, which is quick and cheap (like copying a boolean). #2 is of course slightly slower, since it's a function call, but if it's used a lot, any decent JavaScript engine will inline the function anyway, giving you the benefit of modularity at the source level. (It can take thousands of calls to the function in a shortish period of time to make that kick in, though; e.g., a modern engine only bothers with optimization when it looks likely to matter.)
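A quick way to convince yourself that no copying happens is to mutate through the returned reference and observe the change through the original object (TYPE and the field names here are placeholders):

```javascript
const TYPE = "special";
const data = { special: { prop: { sub: 1 } } };

function getData(isMixed) {
  // No copy happens here: only a reference to the same object is returned
  return isMixed ? data[TYPE] : data;
}

const view = getData(true);
view.prop.sub = 2;
console.log(data.special.prop.sub); // 2 — both names point at the same object
```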

Memoization in Javascript

I was recently looking into a few JavaScript design patterns and came across memoization. While it looks like a good solution to avoid recalculating values, I can see something wrong with it. Take, for example, this simple code:
function square(num) {
    square.cache = square.cache || {};
    // Return the cached value when we have already computed it
    if (num in square.cache) return square.cache[num];
    var result;
    console.log("calculating a fresh value...");
    result = num * num;
    square.cache[num] = result;
    return square.cache[num];
}
Calling console.log(square(20)) prints out "calculating a fresh value..." and then the result of the calculation, 400.
My real question is: what happens when the cache grows so large after subsequent calculations that it takes more time to retrieve a result from the cache than it takes to calculate a fresh value? Is there any solution to this?
My real question is what happens when the cache grows so large
This is where you would implement a sort of garbage collection: items could be removed from the cache following a cache replacement algorithm.
For example, following Least Recently Used (LRU), you would record when each entry was last accessed and evict the entries that were used least recently.
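As a rough sketch of the LRU idea in JavaScript, a Map can double as the usage-ordered store, since it iterates keys in insertion order (memoizeLRU is a made-up name; single-argument functions only, for simplicity):

```javascript
// Minimal LRU-memoization sketch using a Map's insertion-order iteration
function memoizeLRU(fn, maxSize) {
  const cache = new Map();
  return function (arg) {
    if (cache.has(arg)) {
      // Re-insert to mark this entry as most recently used
      const value = cache.get(arg);
      cache.delete(arg);
      cache.set(arg, value);
      return value;
    }
    const value = fn(arg);
    cache.set(arg, value);
    if (cache.size > maxSize) {
      // Evict the least recently used entry (first key in iteration order)
      cache.delete(cache.keys().next().value);
    }
    return value;
  };
}

// Usage sketch:
let calls = 0;
const squareLRU = memoizeLRU(n => { calls++; return n * n; }, 2);
squareLRU(2); // computed
squareLRU(3); // computed
squareLRU(2); // cached (and marked most recently used)
squareLRU(4); // computed; evicts 3, the least recently used
```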
SoundCloud use an object store, and a very interesting read is their article on how they built their webapp.

Tasks in javascript?

Essence of the question
The real reason why I ask this question is not that I want my problem solved for me - I want to know how to work with tasks in JavaScript. I don't need thread parallelism and other stuff. There are two parts to computing something: IO and CPU. I want to make the CPU computation run in the time between an ajax request being sent and its answer coming back from the server. There is an obstacle: from one function I run many tasks, and this function must produce a Task that waits for all the started tasks, processes their results and returns some value. That's all I want. Of course, if you post another way to solve my problem, I will vote for your answer and can set it as the solution if there are no other answers about tasks.
Why do I describe my problem instead of just asking about tasks? Ask the guys who downvoted and closed this question a while ago.
My problem: I want to traverse a tree in JavaScript to find the smallest possible parsing. I have a dictionary of words stored in the form of a trie. When a user gives an input string, I need the count of words in the shortest combination of words that matches the input string.
My dictionary contains these words: my, code, js, myj, scode
A user types myjscode
I traverse my tree of words and find that the input matches myj + scode and my + js + code
Since the first parsing is the shortest, my function returns 2 (the number of words in the shortest parsing)
My Problem
My dictionary tree is huge, so I can't load it fully. To fix this, I want to do some lazy-loading. Each node of the tree is either loaded and points to child nodes or is not loaded yet and contains a link to the data to be loaded.
So, I need to make node lookup calls while I'm traversing the tree. Since these calls are asynchronous, I want to be able to explore other traversals while tree nodes are loading. This will improve the response time for the user.
How I want to solve this problem:
My lookup function will return a task. I can call that task and get its results. Once I traverse to a loaded node, I can then make multiple calls to load child nodes, and each call returns a task. Since these "tasks" are individual bits of functionality, I can queue them up and execute them while I'm waiting for ajax calls to return.
So, I want to know which library I can use, or how I can emulate tasks in JavaScript (I'm thinking of tasks as they exist in C#).
There is a restriction: no server-side code, only ajax calls to precompiled dictionaries in JavaScript. Why? It has to be usable as a password complexity checker.
You say in your question:
Of course, if you post another way to solve my problem, I will vote for your answer and can set it as solution if there are no other answers about tasks.
Good; sorry, but I don't think that C#-style tasks are the right solution here.
I'll accept (although I don't think it's correct) your assertion that for security reasons you have to do everything client-side. As an aside, might I point out that if you are scared of somebody snooping (because you have a security weakness), then passing lots of requests for parts of the password is just as insecure as passing one request? Sorry, I appear to have done so without consent!
Nonetheless, I will attempt to answer with a broad outline how I would approach your problem if, indeed, you had to do it in JavaScript; I would use promises. Probably jQuery's Deferred implementation, to be specific. I'll give a very rough pseudo-code outline here.
You start with a nicely structured trie. Using recursion, I would build up a nicely structured "solution tree", which would be a nested array of arrays; this would give the flexibility of being able to respond to the user with a specific message. However, since you seem prepared to lose that bonus and only want a single digit as a solution, I will outline a slightly simpler approach that you could, if needed, adapt to return arrays of the form (from your example):
I mention this structure here also, partly, as it helps explain the approach I am adopting.
I will refer to "nodes" and "valueNodes" in your Trie. I consider "nodes" to be anything and "valueNodes" to be nodes with values.
The recursive promiseToResolveRemainder will resolve 0 for "couldn't do it"; it will only reject the promise if something went wrong (say, the webservice wasn't available).
Dodgy, hacky, untested Pseudo-code
var minDepth = 0; //Zero value represents failure to match (Impossible? Not if you are accepting unicode passwords!)
function promiseToResolveRemainder(remainder, fragmentSoFar){
    var deferred = new jQuery.Deferred();
    var nextChar = remainder.substring(0, 1);
    if (remainder.length == 1){
        //Insert code here to:
        //Test for fragmentSoFar+nextChar being a valueNode.
        //If so, resolve(1)... otherwise resolve(0)
        //!!Note that, subtly, this catches the case where fragmentSoFar is an empty string :)
    } else {
        remainder = remainder.substring(1);
        //We know that we *could* terminate the growing fragment here and proceed,
        //but we could also proceed from here by adding to the fragment.
        var firstPathResolvedIn = 0;
        var secondPathResolvedIn = 0;
        //If fragmentSoFar+nextChar is a valueNode, recurse down the
        //terminate-the-fragment path; when it resolves with resolvedIn:
        //    firstPathResolvedIn = resolvedIn + 1;
        //If a node exists for fragmentSoFar+nextChar, recurse down the
        //extend-the-fragment path; when it resolves with resolvedIn:
        //    secondPathResolvedIn = resolvedIn;
        //We know that we *need* at least a node, or this call to
        //promiseToResolveRemainder at this iteration has been a failure.
        if (firstPathResolvedIn != 0 && secondPathResolvedIn != 0){
            deferred.resolve(Math.max(firstPathResolvedIn, secondPathResolvedIn)); //Sloppy, but promises cannot be resolved twice, so no sweat (I know, that's a *dirty* trick!)
        }
        //ooops! If neither path exists, we've hit a dead end and can't complete from here.
    }
    return deferred.promise();
}
I am not particularly proud of this untested kludgy attempt at code (and I'm not about to write your solution for you!), but I am proud of the approach and am sure that it will yield a robust, reliable and efficient solution to your problem. Unfortunately, you seem to be dependent on a lot of webService calls... I would be very tempted, therefore, to abstract away any calls to the webService and check them through a local cache first.
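That local-cache idea can be sketched by caching the promise itself rather than the resolved value, so concurrent traversals share a single in-flight request (this sketch uses native Promises rather than jQuery's Deferred; fetchNode is a hypothetical webservice call passed in by the caller):

```javascript
// Cache of key -> Promise for the node data
const nodeCache = new Map();

function lookupNode(key, fetchNode) {
  if (!nodeCache.has(key)) {
    // Cache the promise, so concurrent lookups for the same key
    // share one request instead of each hitting the webservice
    nodeCache.set(key, fetchNode(key));
  }
  return nodeCache.get(key);
}

// Usage sketch: both lookups resolve from the same underlying request
// lookupNode("myj", fetchNode).then(node => ...);
// lookupNode("myj", fetchNode).then(node => ...);
```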
Not sure this is what you are looking for, but you might try Web Workers.
Simple examples are easy to google for.
Note - Web Workers are highly browser-dependent and do not run as a separate task on the machine.