1 Introduction

In general, the whole class of truth theories can be divided into two categories: theories that refer to closed systems and theories that refer to open, empirically interpreted ones. The notion of a closed system can be identified, under some conditions, with deductive systems as considered in Tarski (1944) or with symbolic constructive systems as in Goodman (1977). These systems contain a given subject language, usually interpreted in abstract models, but the meaning of the terms of the given language does not necessarily have to be explained or even determined in detail. This concept should not be confused with the so-called semantically closed languages, which are identical to their metalanguages—see (Leitgeb 2001, pp. 297–303). In closed systems the truth is defined inside the system and does not refer to the real (physical) world (Kouneiher and da Costa 2020). By contrast, open systems are empirically interpreted by real processes and phenomena. In such systems the reference to reality, considered in the frame of a certain correspondence, plays a crucial role. Confirmation of a given hypothesis, including a scientific one, is one of the important problems (Kuipers 2016; Luk 2020; Schippers 2017). In this paper the problem of the truth in open systems is considered.

The operationalization of the mentioned correspondence is one of the key problems. In this paper it is solved by using autonomous systems theory. Such an approach enables the development of an action design that verifies a hypothesis concerning the reality in which the agent operates. Moreover, the cybernetic construct of an autonomous agent allows the researcher to consider a wide class of cognitive entities, which, in previous approaches, have been limited to human beings as cognitive subjects.

The problems discussed in this article refer to the following fundamental problems of empirical aletheiology:

  1. What the truth is—the problem of the definition of truth.

  2. How the truth can be expressed—the problem of truth bearers.

  3. Why a judgement is true—the problem of the truth criterion.

  4. How it can be tested whether a given judgement is true—the problem of truth verification.

  5. What ways of cognition of the truth are justified—the problem of the methodology of finding the truth.

  6. What the specifics and the functionality of the subject that tries to find the truth are.

Currently, a new stream of investigations can be observed. It consists in looking for inspiration that illuminates the problem from new perspectives. The studies conducted “from the point of view of a systematic and general theory of rational goal-setting which has its roots in management science” (Olsson 2017), in particular in the context of decision making (Babic 2019; Konek and Levinstein 2017; Levinstein 2017), are examples of such an approach. The way in which the problem is considered in this paper is based on ideas worked out in systems theory and lies within the aforementioned stream of studies. Moreover, the presented solution refers to the idea that studies concerning the truth are connected with research concerning perception and knowledge (Beni 2017). The proposal put forward in this paper is based on the concept of autonomous systems and the way they act in their environment. The concept of contextual truth refers to the adequacy of the model of the real world—the autonomous agent’s environment. The objectivization and the theoretical reconstruction of the cognitive subject, as well as the concept of reference, are the key points of the approach presented in this article. This paper refers strongly to the problems discussed in Olsson (2017) and, as a consequence, in Engel (2007) and Rorty (2007). Furthermore, the relation to classical theories of the truth is considered. In particular, the relationship between the proposed approach and pragmatism, contextualism, relativism and realism is shown.

In empirically interpreted truth theories the reference to reality, considered in the frame of a certain correspondence, plays a crucial role. The concept of the aforementioned correspondence remains, however, undefined (O’Connor 1975). The problem with such a definition is connected with the nature of the representation of reality as well as with the way of expressing this representation (Fumerton 2002). Furthermore, in referential theories of the truth there are crucial difficulties with the truth criterion. Two judgements, as well as two facts, can be compared. There are crucial problems, however, when a judgement has to be compared with a fact. Moreover, there are problems with the truth criterion for negative judgements because facts that would correspond to such sentences do not exist. There have been numerous attempts to solve the specified problems, but the results are far from satisfactory. The lack of proper tools remains among the crucial reasons for this situation. Bocheński stressed that, in philosophy, opaqueness is caused by the lack of precise, adequate methods—see (Bocheński 1968, Chapter 3). Therefore, providing precise tools for the study of the problem of the truth in open systems was the main motivation for these studies. In particular, working out precise methods for:

  • analysis of the concept of the truth in open systems,

  • analysis of the process of cognition,

  • analysis of the specificity of the cognitive subject,

  • analysis of reference, first of all of the correspondence in open systems

are the aims of this paper. The starting point of the presented considerations is the following: we are doomed to subjective cognition, but this cognition, in a way, reflects the objective truth generated by the existing world. As a consequence, ontological theses have logical value relative to the way in which the world is expressed (Goodman 1960, 1975, 1977).

This paper is organized in the following way. The foundations of the autonomous system theory are put forward in Sect. 2. The proposed concept of the truth and the discussion can be found in Sects. 3 and 4, respectively. Section 5 includes the concluding remarks.

2 Autonomous Systems Theory

In general, in cybernetics, the system is the basic concept. It is a unit which acts in its environment. The system is isolated from the environment by well-defined borders and communicates and interacts with it by using input and output modules. The obtained signals, energy and matter are processed inside the system. The system is controlled externally or internally. In the latter case the cybernetic system is autonomous. The theoretical foundations of such systems were proposed by the Polish cybernetician Marian Mazur and applied by him in psychology (Mazur 1976). Recently, Mazur’s approach has been applied to the analysis of health-care systems at the national level (Bielecki and Stocki 2010; Bielecki and Nieszporska 2019) as well as to the analysis of the phenomenon of life (Bielecki 2015). The last paper deals with the philosophy of biology. In this paper Mazur’s theory is applied to the analysis of various aspects of the notion of truth by using it as a theoretical frame for the cognitive subject.

Let us recall the foundations of autonomous systems theory. The details can be found in the papers (Bielecki 2015; Bielecki and Stocki 2010) and in the monograph (Mazur 1976). The autonomous system, also called the autonomous agent, is a kind of cybernetic system which has a specific organization—see Fig. 1—and consists of the following basic elements.

  • Alimentator is the input module that obtains energy from the environment.

  • Receptor is the input module that obtains signals from the environment and translates them into the internal code of the system.

  • Effector is the output module that generates reactions of the system.

  • Correlator is the inner module that organizes, processes, updates and stores information and knowledge. Information and knowledge processing, as well as their updating, are connected with learning abilities, which are also functionalities with which the correlator is equipped.

  • Accumulator is the inner module that stores and processes resources.

  • Homeostat is the inner module that controls the whole system and, among other things, secures the functional balance of the system. The homeostat creates the system’s own goals and thus gives it total autonomy.

Fig. 1 The autonomous system according to Mazur—see (Mazur 1976)

The receptor, correlator and effector constitute the information line of the system, whereas the alimentator, accumulator and effector constitute the energetic line of the system. The fact that the autonomous system creates its goals by itself and has freedom of action is its crucial property. The autonomous agent has insight into its inner states at least to the extent necessary for the realization of its goals. This insight is realized by the homeostat.

Within the frame of cybernetics, the concept of knowledge can be seen in a slightly new light. Explaining the issue from the beginning: a signal is a measurable physical quantity that is fed to the input of the autonomous system. A stimulus is a signal that changes the state of the input and, as a consequence, is sent to the correlator. Information is a stimulus, usually structured, which can change the state or the structure of the correlator. Knowledge is a model of the environment encoded in the correlator in the form of its structure or state.

It should be stressed that in the proposed approach both the information and the stimulus should be considered from the point of view of a given autonomous system. This means, in turn, that something that is information or a stimulus for one autonomous system may be neither information nor a stimulus for another one. For instance, let us consider the ovum as an autonomous system. Then, the duck spermatozoon carries crucial information for the duck’s ovum and does not carry any information for, let us say, the rabbit ovum, because it can change neither the structure nor the state of the rabbit’s ovum.

In order to clarify the idea with an easily understandable example, let us consider the language abilities of a human being who speaks a foreign language. Knowledge of grammatical structures, vocabulary and semantic contexts, including idiomatic expressions, constitutes the person’s knowledge. Suppose the teacher teaches the person a new word. Then the teacher’s message is information. This information is built into the existing knowledge system, extending it.
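The distinctions above can be illustrated by a minimal sketch. The class names, the sensitivity threshold and the dictionary-based representation below are assumptions made only for the example and are not part of Mazur’s formalism; the sketch merely shows that a signal becomes a stimulus only when it changes the state of the input, and that a stimulus is information only for an agent whose correlator it can modify.

```python
# Minimal illustrative sketch of the signal / stimulus / information / knowledge
# distinction; class names, thresholds and the dictionary-based "knowledge" are
# assumptions made for the example, not part of Mazur's formalism.

class Receptor:
    """Input module: translates external signals into the internal code."""
    def __init__(self, sensitivity: float):
        self.sensitivity = sensitivity

    def perceive(self, signal: float):
        # A signal becomes a stimulus only if it changes the state of the input.
        if abs(signal) > self.sensitivity:
            return {"kind": "stimulus", "value": signal}   # forwarded to the correlator
        return None                                        # signal, but no stimulus


class Correlator:
    """Inner module: stores the model of the environment (the knowledge)."""
    def __init__(self, vocabulary: set):
        self.vocabulary = vocabulary      # what this agent can interpret
        self.knowledge = {}               # the encoded model of the environment

    def receive(self, stimulus, label: str) -> bool:
        # A stimulus is information for THIS agent only if it can change
        # the state or the structure of the correlator.
        if stimulus is None or label not in self.vocabulary:
            return False
        self.knowledge[label] = stimulus["value"]           # state/structure changes
        return True


# The same stimulus is information for one agent and not for another:
duck_ovum = Correlator(vocabulary={"duck_spermatozoon"})
rabbit_ovum = Correlator(vocabulary={"rabbit_spermatozoon"})
stimulus = Receptor(sensitivity=0.1).perceive(signal=1.0)
print(duck_ovum.receive(stimulus, "duck_spermatozoon"))     # True: information
print(rabbit_ovum.receive(stimulus, "duck_spermatozoon"))   # False: no information
```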

It should be mentioned that Mazur’s original autonomous systems theory has been generalized in order to apply it directly to the analysis of biological systems as autonomous agents (Bielecki 2015), which is a topic exploited in current theoretical biology (Rosslenbroich 2009, 2014; Ruiz-Mirazo and Moreno 2012). This widened version of the theory, however, is not necessary for the studies discussed in this paper.

It should also be stressed that, in epistemology, the implicit assumption is made that the human being is the only cognitive subject. In this context the organization of knowledge, its dynamics, its abilities to represent the physical world and the properties of the correspondence between knowledge and the world, including in scientific investigations, are studied widely—the papers (Anderl 2018; Fumerton 2002; Hohol and Miłkowski 2019; Issajeva 2020; Kouneiher and da Costa 2020; Luk 2017, 2020; Schippers 2017) can be put forward as examples. In this article, however, other cognitive subjects, both biological and artificial, are considered, which is a new contribution to epistemology. Furthermore, the cybernetic approach enables us to utilize in epistemological studies the formal concepts of knowledge worked out both in computer science, in reference to artificial intelligence, and in cognitive psychology. Thus, in artificial intelligence an intelligent system has to be equipped at least with a knowledge base, an inference module and a working memory, organized in the way presented in Fig. 2. This is the minimal modular organization of the correlator. The knowledge base can be organized as a formal structure: rule systems based on various types of logic, including fuzzy ones (Ligȩza 2006), graphs such as semantic nets (Collins and Quillian 1969) and causal maps (Chabib-draa 2002), and frame systems (Minsky 1975) can be put forward as examples. A comprehensive discussion of artificial intelligence systems can be found in Flasiński (2016).
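As a deliberately simplified illustration of the minimal organization just named (a knowledge base, an inference module and a working memory), the sketch below implements a toy forward-chaining rule system. The rules and facts are invented for the example and are not taken from the cited literature.

```python
# Toy rule system illustrating the minimal cognitive architecture: a knowledge
# base of rules, a working memory of facts, and an inference module that
# forward-chains until no new conclusions can be derived. All rules and facts
# are illustrative assumptions.

knowledge_base = [
    # (set of premises, conclusion)
    ({"route_is_frosted"}, "low_friction"),
    ({"low_friction", "high_speed"}, "reduce_speed"),
]

def infer(working_memory: set, rules) -> set:
    """Inference module: derive everything that follows from the working memory."""
    derived = set(working_memory)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# Working memory holds the currently received (and interpreted) stimuli.
working_memory = {"route_is_frosted", "high_speed"}
print(infer(working_memory, knowledge_base))   # includes "reduce_speed"
```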

In this paper the term cognitive subject denotes an autonomous agent that creates in its correlator a model (or models), perhaps extremely simple, of the experienced environment. The proposed approach enables the researcher to analyse precisely the cognitive subjects according to their abilities to reflect various aspects of the real world in their models—see Sect. 4.3.

Fig. 2 The minimal cognitive system according to the theory of artificial intelligence—see (Flasiński 2016, Chapter 9)

3 The Truth in Autonomous Systems

In this section the proposed concept of the truth is presented. First the basics are introduced, and then the possible applications are shown. Let us specify the basics of the proposed concept of the contextual truth.

In referential concepts of the truth the cognition process is verified empirically. The main problem of this verification is that it is done by using senses and models of reality, which means that we can find the truth only subjectively, within the frames of the ways in which we can express reality (Goodman 1960, 1975).

Let us consider a well-known simple example.

Fig. 3 The scheme of the truth criterion in an autonomous system in the context of the chosen goal

Example 1

Let us assume that someone is observing a spoon put into a glass filled with water. The sense of sight informs the observer that the spoon is broken at the surface of the water. On the other hand, the observer, without taking the spoon out of the water, can use his sense of touch and find that the spoon is not broken. Thus, one sense informs the observer that the spoon is broken, whereas the other one denies it. The observer, relying on his physical knowledge, concludes that the spoon is not broken but that the light refracts at the border of two media. On the other hand, it is possible, at least hypothetically, that the observer is not equipped with physical knowledge but believes in magic and, as a consequence, concludes that water has such magical properties that the spoon breaks when it is put into the glass with water and is automatically glued together again when it is taken out of the glass filled with water.

The above example illustrates the crucial role of a model in the interpretation of reality: reality can be interpreted only within the frame of a certain model. Therefore, in the proposed concept of the truth, adjudication about truthfulness is replaced by evaluation of the adequacy of the model. Considering the problem within the frame of autonomous systems theory allows us to formalize the proposed approach. Let us analyze the way in which the autonomous system acts.

The system generates its own goal by itself, which is an immanent property of the autonomous system. On the one hand, goal achievement is connected with efficiently performing an action in the environment. On the other hand, it is connected with achieving a proper inner state of the agent. Let us consider the following example.

Example 2

Let us assume that a predator, for example a fox, is hungry and, as a consequence, wants to catch a rabbit. As mentioned above, a certain action in the environment has to be performed—the rabbit has to be localized and caught. The fox predicts, not necessarily consciously, that satiation of its appetite will be the effect of performing this action. Thus, the specified action is subordinated to the main goal of satiating the appetite, which means that a certain inner state of the system should be achieved—the state of satiety in the discussed case. After performing the action the fox tests, reflexively, not intentionally, whether the predicted inner state has been achieved.

In order to achieve the goal, the agent performs an action that it generates on the basis of an analysis of the way in which the goal can be achieved and of pieces of information from the environment interpreted within the frame of the model of the environment (the world). The inner state of the agent that has been attained as a result of performing the action is compared with the state which would be attained if the goal were achieved—see Fig. 3. If the difference between these two states is sufficiently small, then the used model is adequate in the context of the performed task, i.e. in the context of achieving the assumed goal. Otherwise, the model should be modified or changed.
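The comparison described above can be summarized as a simple adequacy test. In the sketch below, which is only a minimal reading of Fig. 3 and not the author’s formalism, the inner states are represented as numeric vectors and the tolerance is a context-dependent parameter; both representations are assumptions made for illustration.

```python
# Minimal sketch of the aletheic comparison: the inner state predicted for goal
# achievement is compared with the inner state actually attained after the
# action; the vector representation and the distance measure are assumptions.

from typing import Sequence

def model_is_adequate(predicted_state: Sequence[float],
                      achieved_state: Sequence[float],
                      tolerance: float) -> bool:
    """Return True if the model behind the action design counts as adequate
    in the context of the pursued goal."""
    difference = max(abs(p - a) for p, a in zip(predicted_state, achieved_state))
    return difference <= tolerance

# If the test fails, the agent should modify or change its model.
adequate = model_is_adequate(predicted_state=[1.0, 0.2],
                             achieved_state=[0.95, 0.25],
                             tolerance=0.1)
print(adequate)   # True: the model counts as adequate for this goal
```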

To sum up, the model in the agent’s correlator corresponds to the environment in such a way that the agent can achieve the goal by using the action design that is based on the model. It should be stressed that the model is valid in a given range of parameters. The said range determines the way the external world appears to the agent. This way of appearing, in turn, provides partial knowledge of the Truth, by which complete and objective knowledge of the world is meant.

To make the idea clearer, let us consider a simple experimental procedure conducted by a cognitive agent.

Example 3

Let us assume that an experimenter, that is, an autonomous cognitive agent, intends to perform a physical measurement. Thus, the agent’s correlator is equipped with cognitive structures that concern physical theories and models, mathematics, the theory of electronics, etc. The scientist plans an experiment and constructs an apparatus based on his knowledge—the content of the correlator. Furthermore, also based on the content of the correlator, he predicts in which range of values the result should fall. After taking the measurement, the experimenter reads the result and, using real number arithmetic, checks whether the result is within the expected range. The last operation is performed as a conscious intellectual operation in which the agent checks whether the sensory impression resulting from reading the indication of the measuring device is the same as the predicted impression of reading a value within the predicted range.

It should be stressed, however, that the presented approach implies that the model can be inadequate in two senses. In the first sense, it can be imprecise. In the second, it can be, in a way, overdetailed.

Let us consider another example.

Example 4

Let us consider two agents and let us assume that one of them is chasing the other. If they are unmanned autonomous wheeled vehicles that move with a velocity of about fifty kilometers per hour, then it is enough for them to use classical kinematics to calculate the parameters of the pursuit curve. However, if they are unmanned autonomous spaceships that move with a velocity equal to, let us say, 0.7c, where c denotes the speed of light, then they have to use relativistic kinematics. Using relativistic formulae in the first case is useless because the relativistic corrections would be so small that they have no practical meaning and are not even detectable. In such a case the model would be overdetailed. On the other hand, using classical formulae in the second case would lead to completely incorrect results. In such a case the model is inadequate.
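A back-of-the-envelope calculation with the illustrative numbers of Example 4 makes the difference between the two regimes explicit: the Lorentz factor \(\gamma = 1/\sqrt{1 - v^{2}/c^{2}}\) measures the size of the relativistic correction at each speed.

```python
# Size of the relativistic correction at the two speeds mentioned in Example 4.
import math

c = 299_792_458.0                 # speed of light in m/s

def gamma(v: float) -> float:
    """Lorentz factor for speed v."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

v_vehicle = 50.0 / 3.6            # 50 km/h expressed in m/s
v_spaceship = 0.7 * c

print(gamma(v_vehicle) - 1.0)     # about 1e-15: utterly negligible in practice
print(gamma(v_spaceship) - 1.0)   # about 0.4: a 40% correction that cannot be ignored
```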

Thus, in contrast to all theories of the truth considered so far, the model of the real world can be not only insufficiently true but also, in a way, too detailed. The latter term means that aspects of reality that are not manifested at all when executing the action are unnecessarily included in the model. This can have negative consequences for the agent’s efficient acting because the more detailed the model, the greater the complexity of the information processing necessary to perform the action. Therefore, if the differences between the results obtained on the basis of different models do not matter, it is better to use a simpler model, because for real-time agents the calculation time is an important parameter. It should be stressed, however, that although these considerations are conducted on the basis of the proposed approach, there are arguments, which cannot be underestimated, for the adoption of simpler models and theories in general (Anderl 2018).

The key point of the introduced proposal is that the assessment of the truth of a judgement is replaced by the assessment of the adequacy of the model of reality in the context of the assumed goal. The proposed schema—see Fig. 3—corresponds to inference rules in logic. Thus, let us call this scheme the aletheic schema. The proposed schema is unfailing and corresponds to foolproof inference schemata in logic—see (Bocheński 1968, Chapter 12). It should also be stressed that since the described process of reasoning in biological agents does not have to be conscious, the proposed approach can be a good starting point for the analysis of intuitive reasoning (Climenhaga 2018).

Let us point out a few subtleties of the presented concept. First of all, modification of the model of reality covers several possibilities, sketched in code after the list:

  (a) Selecting the current values of those model parameters that are variable, e.g. selecting the appropriate value of the friction factor when the agent sees that the route has become frosted. This is a minor, on-the-fly adjustment.

  (b) Changing the values of those model parameters that are constant, e.g. when the fox has grown old and its top speed has decreased.

  (c) Creating new modules in the existing knowledge base, e.g. a fox has learned to hunt a new species of animal by developing a new hunting tactic.

  (d) Crucial modifications of the knowledge base, e.g. assuming that the Earth is a sphere moving in space and subjected to the gravitational effects of other cosmic bodies and not, for example, a disk supported by elephants.

  (e) Building up the knowledge base in such a way that the existing one becomes a fragment describing a specific case, e.g. developing the theory of special relativity and observing that Newtonian physics is a special case of relativistic mechanics if we assume that the speed of light is so great that it can be taken to be infinite.
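The sketch announced before the list separates the lighter modifications (a) and (b), which only adjust parameter values, from the structural ones (c)–(e), which change the shape of the knowledge base itself. The dictionary representation and all names and values are assumptions of the example, not part of the proposed theory.

```python
# Illustrative sketch of the modification types (a)-(e): (a)-(b) only adjust
# parameter values, (c) adds a module, (d)-(e) reorganize the knowledge base;
# the dictionary representation and all names are assumptions of the example.

model = {
    "parameters": {"friction_factor": 0.7, "top_speed_mps": 14.0},
    "modules": {"hunt_rabbit": ["locate", "chase", "catch"]},
}

def tune_parameter(model: dict, name: str, value: float) -> None:
    """Cases (a)-(b): keep the structure, change a value (frosted route, ageing fox)."""
    model["parameters"][name] = value

def add_module(model: dict, name: str, action_design: list) -> None:
    """Case (c): extend the knowledge base with a new module (a new hunting tactic)."""
    model["modules"][name] = action_design

def restructure(model: dict, new_modules: dict) -> None:
    """Cases (d)-(e): reorganize the base so the old content becomes a special case."""
    model["modules"] = {**new_modules, "special_case_of_new_theory": model["modules"]}

tune_parameter(model, "friction_factor", 0.3)                   # (a)
add_module(model, "hunt_pheasant", ["stalk", "ambush"])         # (c)
restructure(model, {"relativistic_kinematics": ["use_gamma"]})  # (d)-(e)
print(model)
```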

Furthermore, the term to achieve the goal should be analyzed in detail. In the case of predators, one attack in several is successful; for cheetahs, for example, one in seven. The predator should modify its world model not when a single attack fails, but when it is threatened with starvation.

To sum up, the creation of a mechanism of inference about the properties of the real world on the basis of the analysis of the dynamics of the inner states of the autonomous system is the most important point of the proposed approach. Furthermore, each of the notions of the cognitive subject, the truth, the truth bearers and the knowledge has been redefined.

In the light of the proposed method, the basic problems of aletheiology specified in the Introduction can be put forward in the following way.

  1. The truth is the adequacy of the model in the context of the set goal.

  2. Minimal modules in the knowledge base that can be verified by executing an action and, as a consequence, by comparing the predicted and the achieved state of the agent are said to be the truth bearers.

  3. The difference between the predicted inner state of the whole system and the achieved state of the system is the truth criterion, i.e. the criterion of the adequacy of the used model.

  4. The truth verification, i.e. in the proposed method the verification of the adequacy of the model in the context of the planned task, is done by performing the action and comparing the predicted inner state with the achieved inner state of the cognitive subject (the autonomous agent).

  5. Designing the action by which the cognitive subject is going to achieve the assumed goal is the methodology of finding the truth. The action design is created on the basis of the knowledge that resides in the correlator and generates the models of the world. Therefore, the efficiency of the action is the verification of the adequacy of the model.

  6. Mazur’s autonomous system is the cognitive subject.

  7. The classic problem "How should the sentence 'the cognitive subject S knows that \(\phi\)' be understood?" is solved, in the proposed approach, in the following way (a tentative formal rendering is sketched after this list):

     (a) \(\phi\) is a fragment of the content of the correlator such that there exist at least one goal, conditions (for instance a range of parameters) and an action design created on the basis of \(\phi\) such that the goal is achievable by using the action design provided that the conditions are satisfied. In other words, \(\phi\) is a truth bearer (a module of the knowledge base of S), or it is a structure in the correlator.

     (b) \(\phi\) is encoded in the correlator of the autonomous cognitive agent S.

     (c) S performed the designed action and, as a result, S achieved its goal, which manifests itself in achieving the inner state the agent assumed before performing the action.
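The three conditions (a)–(c) can be compressed into a single schematic formula. The rendering below is only a tentative paraphrase with hypothetical predicate names; \(\mathrm{pred}(g)\) stands for the inner state predicted for the achievement of the goal \(g\).

```latex
% Tentative formal paraphrase of conditions (a)-(c); all predicate names are
% assumptions introduced for this sketch, not the author's notation.
\begin{align*}
K_S(\phi) \;\Longleftrightarrow\;{} & \mathrm{Encoded}\bigl(\phi,\mathrm{Corr}(S)\bigr) \\
  & {}\wedge \exists g\,\exists C\,\exists a\,\bigl[\mathrm{Design}(a,\phi)
      \wedge \mathrm{Achievable}(g,a,C) \\
  & \qquad\qquad {}\wedge \mathrm{Performed}(S,a)
      \wedge \mathrm{Attained}\bigl(S,\mathrm{pred}(g)\bigr)\bigr].
\end{align*}
```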

4 Discussion

In this section four problems are discussed. First of all, the relations between episteme, which reflects the objective truth (the Truth), and doxa, which is a cognitive frame being a direct result of experienced sensory stimuli, are considered within the frame of the proposed approach. It seems that the proposed approach sheds new light on this issue. Then, the general problems of the theory of truth and the Truth are considered. In the third subsection the hierarchy of cognitive agents that emerges from the proposed approach is discussed. In the last subsection the relationship of contextual truth with various philosophical concepts is discussed.

4.1 Episteme, Doxa and Techne from the Perspective of Contextual Truth

Since antiquity there has been a discussion in philosophy about the nuances of definitions and the interrelationships between episteme, doxa and techne. In this publication, the entire historical context of these studies is omitted. Only new aspects of the issue are considered that emerge from the application of autonomous systems theory.

Within the frame of the proposed approach, techne is the ability of the autonomous agent to create an appropriate action design. Doxa is the content of the correlator resulting from the impact of the environment on the autonomous agent. Thus, an agent is able to create new pieces of doxa. During the operation of the autonomous system, doxa changes—it is verified, supplemented and, most importantly, the system acquires knowledge about the types of tasks in which a given fragment of doxa turned out to be effective and those in which it did not. The latter gives the agent the opportunity to analyze objective reality, to which it has only indirect access, through the senses. It is a bit like a technical drawing, where projecting an object onto various planes allows it to be fully reconstructed. Thus, it can be said that doxa is a projection of episteme onto the ontological plane on which the task is realized. The said plane is determined by the range of parameters—for instance the range of speed in Examples 2 and 4 in the previous section. As a result, doxa, acquired in various cognitive contexts by using techne, gives at least some types of cognitive agents a chance to reconstruct the objective reality—the Truth. To sum up, achieving a goal by using the action design created on the basis of the knowledge is a crucial construct of the proposed approach. This refers to the problem of empirical adequacy and, as a consequence, of understanding and managing the world (Bhakthavatsalam and Cartwright 2017). In order to analyse the problem more precisely, let us consider the following example.

Example 5

Let us assume that an autonomous agent would like to collect knowledge about a forest. In the case of a human being, the agent can study the biological cells of the forest plants and animals by using a microscope. Then, he can view a single tree from a distance of several meters. Subsequently, he can take a walk in the forest combined with observation of the terrain and the phenomena occurring in the forest. Finally, he can see the whole forest from a bird’s eye view. In each of the above cases, the agent gains another fragment of doxa by performing another task, in this case a cognitive one. The human agent can try to create full, objective knowledge of the forest from the fragments obtained. This knowledge can then be verified by completing another task—e.g. attempting to introduce a new plant species into the forest. The action design for this task will be made based on the created knowledge. The effectiveness of this action will be a test of the truthfulness of the knowledge—the agent will compare the predicted results with those obtained.

The above example is intended to illustrate the possibility of using the concept of contextual truth in the epistemological analysis of the specifics of obtaining scientific results and testing scientific theories. To clarify the aforementioned projection metaphor, the realization of an individual research task—in the above example related to the change of scale—can be compared to obtaining a single cross-section in a tomographic study, and the construction of knowledge reflecting objective truth—the Truth—to the reconstruction of a three-dimensional object on the basis of the individual sections. Let us emphasize that the term the Truth is understood in this article in the sense of objective truth that regards only the environment in which the cognitive agent operates. Therefore, the presented approach can be applied in the philosophy of science but is rather useless in metaphysics.

It should also be stressed that the proposed approach can be used to analyze the specifics of the human being as a cognitive agent. It is obvious that, for instance, animals do not have the possibility of investigating the forest at different scales as in the example above. The forest is something different for the bacteria living in it, something different for the ant, something different for the woodpecker and something different for the wolf. Each of the said autonomous agents has a specific cognitive perspective beyond which it cannot go and, as a consequence, has access only to doxa, without the possibility of attempting to reconstruct the episteme, which is a reflection of objective truth—the Truth.

The proposed approach to the problem, in short, consists in the following schema: a model of a fragment of the world, the action design created on the basis of the model, verification of the range of parameters in which the action enables the agent to achieve the aim, and analysis of the complexity of the information processing that is necessary to achieve the goal. This approach provides a specific basis for settling cases of doxastic disagreement (Fabienne 2019).

An additional advantage of the proposed approach is the possibility of experimentally testing the creation of knowledge in an autonomous robot. In this case, we can have direct insight into the whole process of constructing pieces of doxa in the correlator, as well as the opportunity to experiment and to observe how the acquired fragments of knowledge are generalized, which is, in a way, an attempt to create episteme.

4.2 The General Problems of Theory of Truth from the Perspective of Contextual Truth

In the frame of the proposed contextual truth concept, the general problems of theory of truth (see Künne 2003, pp. 2–21) can be elucidated in the following way.

  1. As mentioned above, according to the classical approach, sentences, propositions or judgments are the truth bearers—see (Püntel 2008, p. 226); in a more formal depiction the problem refers to predicate calculus (Püntel 2008, pp. 226–227). It is beyond discussion, however, that the truth bearers are situated as a part of a property of the language which describes the real world. In the approach proposed in this paper, the knowledge base, which is a part of the correlator, is the truth bearer. Since the knowledge base is a formal structure, this is a generalization of Puntel’s conclusion, who sums up the discussion in the following way: “Within the current systematic framework, an additional qualification may be added: sentence and proposition are truth-bearings (in very broad sense) only because they are, in the final analysis, structures...”. It should be stressed explicitly that the knowledge base in the proposed approach is not restricted to the language aspect; it can be a formal structure based on mathematics, computer science or cybernetics.

  2. The description of the cognitive subject by reference to autonomous systems theory is a crucial advantage of the proposed idea. Such an approach enables us, at least partially, to solve the classical problem—see (Chalmers 1977, 1996; Ingarden 1964; Magee 2002): if only mental states exist, what substantiates my conviction that something else exists?

  3. It has been pointed out that the predicate "is true" is incomplete and can be completed depending on the circumstances (Kokoszyńska 1951). Such relativism has been studied so far in the context of language analysis. In the approach proposed in this paper it is studied in the context of control within the frame of the autonomous systems concept.

  4. It has been pointed out that under relative truthfulness the logical value of a statement can depend on the context in which the judgement was stated (MacFarlane 2005). In the proposed approach this context is specified explicitly.

  5. The proposed approach elucidates the problem, stated by Ingarden, of whether the senses reflect the real world in an adequate way (Ingarden 1964). Furthermore, in the same monograph, Ingarden stated that a new experience can force a person to revise judgements created on the basis of previous experience. The approach proposed in this paper, in which the possibility of changes in the knowledge base is one of the key points of the aletheic schema, refers to this problem.

4.3 The Autonomous System as a Cognitive Subject

Autonomous systems theory, as well as the results obtained in computer science and cybernetics, provides us with precise tools for classifying autonomous agents according to their cognitive abilities and functionalities. Thus, the following hierarchy of autonomous systems considered as cognitive agents can be distinguished.

  1. The simplest autonomous agent can only create behaviours that directly support existence and remove threats. In this type of agent the pieces of doxa that reside in the correlator have the character of a stimulus-reaction scheme. Let us call autonomous agents equipped with only such cognitive abilities reflex agents. Bacteria are biological examples of reflex agents. It should be stressed, however, that even bacterial stimulus-reaction schemata can be quite complex (Lyon 2017).

  2. Let us call an associative agent the type of autonomous agent that is able to carry out a simple analysis of direct cause-and-effect relationships. In this case, the said cause-and-effect relationships must be modelled in the correlator by using so-called associative memory. A correlator equipped with such functionality is able to use simple implications in the inference process, for instance: "If I extend my arm with a stick, then I will reach a banana that I cannot reach at the moment." It should be stressed that it is not postulated that the agent is equipped with language skills allowing it to express the inference as in the above sentence. The said functionality means only that the inference process of the agent corresponds to the said sentence. Such models enable the agent to use elements of the environment as simple, unmanufactured tools. Mammals, first of all apes, are biological examples of associative agents.

  3. A conscious agent is able to model complex cause-and-effect chains. The inference module in the correlator of this type of autonomous agent is equipped with a conditional mode, not necessarily in linguistic form. This allows the agent to model various variants of future events and, as a consequence, to work out complex strategies of activity. The knowledge of this type of agent has a strong semantic aspect. A biological example was probably Pithecanthropus, who was the first to use fire. This skill was an example of effective inference using a complex cause-and-effect chain.

  4. A self-conscious agent is able to change the epistemic perspective—see Example 5. As a consequence, he is aware of the existence of episteme and of the difference between episteme and doxa. Getting to know episteme is his conscious goal, created by the homeostat. This agent is aware of the problem of the veracity of episteme but does not have reliable tools to verify it. The human being is an example of this type of agent.

  5. The hypothetical omniscient agent has a trustworthy episteme proven by reliable criteria. In addition, he has a proof of the reliability of the criteria used.

4.4 Relationship of Contextual Truth with Various Philosophical Concepts

Various aspects of the theory of truth are analysed in many philosophical systems. The approach proposed in this paper refers strongly to a few of them. Let us discuss the interrelations between the proposed idea and the existing approaches to the truth in various philosophical systems, as well as various aspects of the proposed contextual truth theory.

  • Pragmatism The proposed approach refers strongly to the realization of a goal. This reference is the very core of pragmatism: “...after pointing out that our beliefs are really rules for action, said that to develop a thought’s meaning, we need only determine what conduct it is fitted to produce: that conduct is for us its sole significance” (James 1907, Lecture 2). In the proposed cybernetic approach the content of the correlator—beliefs in James’s view—is the basis on which the action design is created. Furthermore, as in the pragmatic approach, this is its sole significance. In the proposed approach, the truth is called contextual because it is considered in the context of the goal. This is in concordance with pragmatism, in which the truth is related to the situation (James 1907, Lecture 6). Furthermore, within the frame of the cybernetic approach, it is specified precisely what it means that the recognition of the truth should be expedient. John Dewey associated the truthfulness of knowledge with the effectiveness of action (Dewey 1938), which is consistent with the proposal put forward in this paper. Dewey, however, rejected the concept of absolute truth, which differs from the proposed contextual truth idea, according to which the absolute, objective truth exists as the objective properties of the environment in which the cognitive subject acts. The contextual truth can be regarded as the projection of the absolute truth onto the epistemological subspace generated by the realization of the task—see Sect. 4.1.

  • Realism The proposed approach refers to a cognitive subject that operates in an objectively existing environment. The agent perceives its environment. The criterion of the rational acceptability of a given model is one of the crucial problems in philosophical realism—see (Putnam 1982) and, in the context of the philosophy of physics, Massimi (2018a, b). In the proposed approach the rational acceptability is algorithmized.

  • Contextualism In classical, functional (epistemic) contextualism (see Pepper 1942) two crucial assumptions are made:

    (a) Each phenomenon is a dynamic process that always takes place in a specific context.

    (b) Contextualism abstracts from the existence of objective reality and recognizes that the truth of ideas and conclusions is related solely to their functionality and usefulness. Ontological and epistemic assumptions can be neither verified nor justified—they can only be accepted or rejected arbitrarily.

    In the proposed approach the first assumption is accepted and the said context is clarified—it is the action performed in order to achieve the goal. Moreover, the term achieve the goal is made more specific and, in part, redefined—it means not only achieving a specific effect in the autonomous agent’s environment but, first of all, achieving the assumed internal state of the acting agent. The second assumption, however, is not accepted. First of all, in the proposed approach the existence of objective reality is postulated, and the individual components of the model should properly reflect at least those aspects of reality that are crucial in the context of the executed task. Thus, the mentioned components are not only tested by the agent that performs actions designed on the basis of the said model but also modified or changed in case they prove to be inadequate.

  • Relativism “A real relativist is someone who takes proposition truth to be relative to some other parameter, in addition to worlds and possibly times” (MacFarlane 2005). In the approach described in this paper, the adequacy of the model depends on the range of parameters for which the action is executed, for instance the range of speed in Example 4. Thus, the approach has a relativistic aspect. This, however, does not mean that the truth is relative, but only that under certain conditions it manifests some of its aspects and not others. A thorough examination of this issue makes an important contribution to the reconstruction of episteme—see Sect. 4.1.

  • Functional aspect of the proposed approach is manifested in the fact that the action design is created by the autonomous agent and then performed by it in order to achieve the assumed goals.

  • Linguistic aspect Language plays a crucial role in epistemology. The descriptive power of language, the possibilities of its formalization, as well as its syntactic and semantic aspects, first of all in the context of judgements as truth bearers, are studied intensively (Popper 1955; Tarski 1944, 1969; Popper 1963, pp. 662–663), (Popper 1972, p. 519). In the approach proposed in this paper language is replaced by data structures (at the lower level) and the knowledge base (at a higher level). They exist both in artificial autonomous systems, such as autonomous robots, and in biological systems, first of all in human beings. In the latter they have the form of cognitive structures. Formal models of cognitive structures are common to both artificial and biological autonomous agents, for instance rule systems, frame systems, semantic nets and causal maps—see, for instance, (Chabib-draa 2002; Flasiński 2016). This allows us to apply the results obtained in cognitive science, artificial intelligence, psychology and biology to the analysis of the structure of the correlator and its dynamics.

  • Subjectivistic aspect is present in the proposed approach because the achieved inner state is the reference point for the verification of the adequacy of the used model.

  • Coherent aspect is present because the problem of fitting the model to new pieces of information, especially if the assumed goal has not been achieved, is one of the key problems in the proposed approach. It should be stressed that in the context of the proposed method the coherent aspect is connected with the dynamics of the knowledge base. This problem is, at least partially, worked out in psychology, computer science and cybernetics, as well as in the philosophy of science. In psychology, for instance, Piaget, who referred his studies to developmental psychology, studied the dynamics of cognitive structures in the context of the experience of the cognitive subject (Piaget 1975). He described assimilation and accommodation as the elementary processes of change of cognitive structures. Within the frame of the approach proposed in this paper, assimilation means recording new facts in the knowledge base and, sometimes, a small change of the basic parameters. Accommodation means, at least, the reorganization of the knowledge base. In computer science and cybernetics, machine learning provides good tools for studies that concern the dynamics of the knowledge base in the context of the performed task. In the philosophy of science it is considered “how evidence accumulates across theory change, how different evidence can be amalgamated and used jointly, and how the same evidence can be used to constrain competing theories in the service of breaking local underdetermination” (Boyd 2018). In the methodology of the development of science the problem of creating a new theory such that the old one is its border case is well known—for instance, classical mechanics is the border case of relativistic mechanics if we take \(c\rightarrow \infty\), where c is the speed of light. On the basis of the proposed approach, the above issue is related to the modification of the correlator in the light of the results of the performed action. This is a non-trivial problem and it is planned to be the topic of a separate article.

5 Concluding Remarks

The proposed contextual concept of the truth relates to the model of the world. The concept has a well-defined verification criterion, which overcomes the problems connected with referential truth theories. This is a new perspective in studies that concern the truth, which is here defined as the adequacy of the model of reality in the context of achieving the assumed goal. This adequacy is verified by comparing two inner states of the autonomous system, which is the cognitive subject. It turns out that, in the context of self-control of the cognitive subject (the autonomous system), the model can be both insufficiently true and overdetailed, which constitutes two types of model inadequacy.

The concept of the cognitive subject, which in the theories put forward so far was limited to the human being, has been broadened. Furthermore, the formalization of the truth bearers allows researchers to classify the truth bearers precisely with regard to their descriptive power, similarly to the way various logics are classified. As a consequence, a precise classification of the types of cognitive subjects, according to their cognitive possibilities, can be carried out. For instance, within the frame of the proposed approach, a bacterium also acquires fragments of doxa and is equipped with a certain type of techne (Lyon 2017) because it is an autonomous agent (Bielecki 2015).

Note that contextual truth, in the meaning presented in the proposed approach, is related to the objective truth (episteme), which is a significant novelty—see the discussion at the end of Sect. 4.1. The idea that the analysis of the range of parameters in which the model is both adequate and not too detailed can be the basis for an attempt to create a more adequate model of objective truth seems to be a genuinely new proposal.

It should be stressed that the presented approach can potentially be adopted in studies in which the truth, its bearers and verification procedures are analyzed. Philosophical analysis of empirical procedures in science, as well as the analysis of truth in law, can be put forward as examples—see, for instance, (Królikowski 2006) in the context of legal systems.