A suggestion on how to structure Redux / ReactJS applications
Recently I was rewriting in Redux/React the web application of Flow, a tool to help developers better understand the structure and the behavior of their applications. It provides an interactive web interface to visualize the execution flow of Java programs. And I was confronting the problem of how to structure my project.
Organize by file nature or by feature / data domain?
In most samples and tutorials of Redux/React projects, including the official ones, the common file structure is organized by file nature: actions, reducers, selectors, (presentational) components and containers.
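As a sketch, such a by-file-nature layout might look like this (the file names are illustrative, based on the data domains mentioned later in this post):

```
src/app/
  actions/
    ArtifactActions.js
    CallActions.js
    ...
  reducers/
    ArtifactReducer.js
    CallReducer.js
    ...
  selectors/
  components/
  containers/
```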
So I started with this structure, and I soon found the limit of this approach: it does not scale. It is easy to see this coming, because the file natures will always remain the same while the features and data domains grow in real-world applications. As a consequence, we get more and more files in each of these folders. Moreover, whenever I started a new feature, I wasted a lot of time scrolling and navigating in my project to find all the files related to that feature.
Then I tried to group the files by features. But I found some new problems. In the application that I’m working on, we collect data from the execution of a Java application, such as the call stacks, the running threads, the artifacts (packages, classes, methods) and so on. And we build different visualizations/views upon these data in order to help developers better understand their program. For example, we build a call graph that represents the artifacts and the relationships between them. We also provide a flame chart that shows all the method calls during the execution. You can search, filter and select artifacts and calls among all the visualizations.
You can go to the live demo here to see what these visualizations look like, or check out this post to see how I use the visualizations to understand JUnit runners.
I realized that, on the one hand, a feature/view often involves multiple data domains. For example, rendering the flame chart needs the threads, the calls and possibly the filters.
On the other hand, a data domain can be shared by multiple features and it does not necessarily become a feature or a view itself. Both call graph and flame chart need the filter data. To me, reducers, actions and selectors control the data and business logic whereas containers and components construct the views.
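To make that split concrete, here is a minimal, self-contained sketch of what a calls data domain could look like in plain JavaScript. The names (`ADD_CALL`, `addCall`, `getCalls`) and the state shape are my own illustrations, not the actual Flow code:

```javascript
// Minimal sketch of a "calls" data domain: the reducer owns the state shape,
// actions describe changes, and selectors are the only way other modules read it.

const ADD_CALL = 'calls/ADD_CALL';

// Action creator
function addCall(call) {
  return { type: ADD_CALL, payload: call };
}

// Reducer: a pure function (previousState, action) => nextState
function callsReducer(state = { byId: {}, allIds: [] }, action) {
  switch (action.type) {
    case ADD_CALL: {
      const call = action.payload;
      return {
        byId: { ...state.byId, [call.id]: call },
        allIds: [...state.allIds, call.id],
      };
    }
    default:
      return state;
  }
}

// Selector: views depend on this, not on the internal state shape
const getCalls = (state) => state.allIds.map((id) => state.byId[id]);

// Simulate a dispatch cycle without a store
let state = callsReducer(undefined, { type: '@@INIT' });
state = callsReducer(state, addCall({ id: 1, method: 'main' }));
```

Containers never reach into `byId` or `allIds` directly; they only call `getCalls`, which is what lets the data domain evolve independently of the views.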
Finally I ended up with the following structure by extracting containers and components from the feature folders:

```
app/
  data/
    artifacts/
      ArtifactReducer.js
      ArtifactActions.js
      ArtifactSelectors.js
    calls/
      CallReducer.js
      CallActions.js
      CallSelectors.js
    filters/
      …
  containers/
    FlameChartContainer.jsx   // depends on calls, threads, filters
  components/
    …
```
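For instance, a container like FlameChartContainer can pull its props from several data domains at once. The following `mapStateToProps` is a hypothetical sketch; the selector names and state shape are assumptions, not the actual Flow code:

```javascript
// Hypothetical selectors, one per data domain
const getCalls = (state) => state.calls.items;
const getThreads = (state) => state.threads.items;
const getHiddenCallIds = (state) => state.filters.hiddenCallIds;

// One container, three data domains: threads, calls and filters
function mapStateToProps(state) {
  const hidden = new Set(getHiddenCallIds(state));
  return {
    threads: getThreads(state),
    calls: getCalls(state).filter((call) => !hidden.has(call.id)),
  };
}
```

With react-redux, this function would then be passed to `connect(mapStateToProps)(FlameChart)`.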
I think the main advantages of this structure are:
With the ES6 module system, each file is a module: we export functions and variables from each file and import them wherever they are used. So when a container depends on a data domain, we may need to import its reducer, actions and selectors. For example:
```javascript
import CallReducer from '../data/calls/CallReducer.js';
import * as CallActions from '../data/calls/CallActions.js';
import * as CallSelectors from '../data/calls/CallSelectors.js';
```
This requires a lot of imports with relative paths, and the internal structure of the data domain /data/calls is exposed. A better way is to encapsulate the data domain by creating an index.js that re-exports the internal files of this module:
```javascript
import CallReducer from './CallReducer';
import * as CallSelectors from './CallSelectors';
import * as CallActions from './CallActions';

export { CallReducer, CallSelectors, CallActions };
```
This index.js becomes the public API of the data domain and hides its internal file structure from the outside. As a result, the dependency on this module in a container is reduced to a single import:
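Assuming index.js re-exports the three modules above, that single import could look like this:

```javascript
import { CallReducer, CallActions, CallSelectors } from '../data/calls';
```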
This looks much better, right? You can now change the internal structure of a data domain module without worrying about breaking any dependencies.
Where to put the tests?
At first I put my tests in a separate folder src/test beside the source folder src/app. As I also code a lot in Java, this looked right to me, because that's how things are done in Maven projects (sources in src/main/java and tests in src/test/java).
I quickly realized that this is not convenient at all: the tests are far away from the sources. Each time a source file such as src/app/data/calls/CallReducer.js was modified, I had to navigate to the corresponding test src/test/calls/CallReducer.spec.js. And when there was some refactoring, the imports were often broken; unfortunately, we do not benefit from the refactoring capabilities of a Java IDE.
Therefore, I decided to move the tests into the src/app folder so that they are close to the sources. This way, it is much harder to mess up the imports.
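Concretely, each data domain folder then holds its own specs next to the sources, for example:

```
src/app/data/calls/
  CallReducer.js
  CallReducer.spec.js
  CallActions.js
  CallActions.spec.js
  ...
```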
If you are using Karma, it will scan and find all the test files easily by their extension, so moving the tests is pretty transparent.
In this post I wanted to share how I organized the file structure of my project and the reasons why. If you start a very simple Redux/React project, the organization by file nature will work just fine for you, and it may help you learn the major concepts of Redux. But when things get complex, you will run into its limits and adopt other approaches.
After all, what really matters is how to think of your project in terms of data domains, business logic, views, as well as the dependencies and reusability of all these components.